Test Report: QEMU_macOS 17086

9a32fbe416941fe3be1e8bb0a72042cc4c15bbaa:2023-08-23:30696

Failed tests (91/255)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 24.83
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.13
24 TestAddons/parallel/Registry 720.86
25 TestAddons/parallel/Ingress 0.73
27 TestAddons/parallel/MetricsServer 720.78
30 TestAddons/parallel/CSI 720.84
32 TestAddons/parallel/CloudSpanner 820.39
37 TestCertOptions 9.95
38 TestCertExpiration 195.36
39 TestDockerFlags 10.34
40 TestForceSystemdFlag 12.05
41 TestForceSystemdEnv 10.08
86 TestFunctional/parallel/ServiceCmdConnect 28.65
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.17
153 TestImageBuild/serial/BuildWithBuildArg 1.09
162 TestIngressAddonLegacy/serial/ValidateIngressAddons 58
197 TestMountStart/serial/StartWithMountFirst 10.54
200 TestMultiNode/serial/FreshStart2Nodes 10.08
201 TestMultiNode/serial/DeployApp2Nodes 118.31
202 TestMultiNode/serial/PingHostFrom2Pods 0.08
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/ProfileList 0.17
205 TestMultiNode/serial/CopyFile 0.06
206 TestMultiNode/serial/StopNode 0.14
207 TestMultiNode/serial/StartAfterStop 0.11
208 TestMultiNode/serial/RestartKeepsNodes 5.37
209 TestMultiNode/serial/DeleteNode 0.1
210 TestMultiNode/serial/StopMultiNode 0.15
211 TestMultiNode/serial/RestartMultiNode 5.25
212 TestMultiNode/serial/ValidateNameConflict 20.55
216 TestPreload 10.23
218 TestScheduledStopUnix 10.65
219 TestSkaffold 13.44
222 TestRunningBinaryUpgrade 123.74
224 TestKubernetesUpgrade 15.41
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.43
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.13
239 TestStoppedBinaryUpgrade/Setup 161.24
241 TestPause/serial/Start 9.91
251 TestNoKubernetes/serial/StartWithK8s 9.89
252 TestNoKubernetes/serial/StartWithStopK8s 5.47
253 TestNoKubernetes/serial/Start 5.47
257 TestNoKubernetes/serial/StartNoArgs 5.46
259 TestNetworkPlugins/group/auto/Start 9.86
260 TestNetworkPlugins/group/calico/Start 9.63
261 TestNetworkPlugins/group/custom-flannel/Start 9.73
262 TestNetworkPlugins/group/false/Start 9.84
263 TestNetworkPlugins/group/kindnet/Start 9.67
264 TestNetworkPlugins/group/flannel/Start 9.73
265 TestNetworkPlugins/group/enable-default-cni/Start 9.75
266 TestStoppedBinaryUpgrade/Upgrade 2.49
267 TestNetworkPlugins/group/bridge/Start 9.89
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
269 TestNetworkPlugins/group/kubenet/Start 9.68
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.99
273 TestStartStop/group/no-preload/serial/FirstStart 10.13
274 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
275 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
278 TestStartStop/group/old-k8s-version/serial/SecondStart 7.07
279 TestStartStop/group/no-preload/serial/DeployApp 0.09
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
283 TestStartStop/group/no-preload/serial/SecondStart 5.2
284 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
285 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
286 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
287 TestStartStop/group/old-k8s-version/serial/Pause 0.1
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
292 TestStartStop/group/embed-certs/serial/FirstStart 9.88
293 TestStartStop/group/no-preload/serial/Pause 0.12
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.45
296 TestStartStop/group/embed-certs/serial/DeployApp 0.1
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
300 TestStartStop/group/embed-certs/serial/SecondStart 7.02
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.2
306 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
308 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/embed-certs/serial/Pause 0.1
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
314 TestStartStop/group/newest-cni/serial/FirstStart 9.78
315 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
320 TestStartStop/group/newest-cni/serial/SecondStart 5.25
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (24.83s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (24.832161916s)

-- stdout --
	{"specversion":"1.0","id":"c552a0eb-21d4-49a4-995a-e00477ecc8bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-861000] minikube v1.31.2 on Darwin 13.5.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9a55e8e-2755-486f-90b7-0fade65ea0a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17086"}}
	{"specversion":"1.0","id":"00b36e69-1dd9-41c7-b0f6-849c27b373e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig"}}
	{"specversion":"1.0","id":"269e4712-809d-4330-8436-cd70037d1e44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"84f875eb-d012-4c0d-ae79-9d69a72286d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10fbeec0-452c-411a-884c-bc9eb005a970","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube"}}
	{"specversion":"1.0","id":"e6f24d50-440b-42af-90d0-7f2ffea94bcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"d9a38961-4d8a-4076-8d57-f1d265366ba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0844a81c-65d6-4517-9f28-64b45aa53101","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"87aeee39-aca4-4ae6-b858-a9175aa988d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed8a5c6a-63fb-40bd-ad44-8a69748f2c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cfb97c8-1cdb-4f0e-bc31-2af71f6b95fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-861000 in cluster download-only-861000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"79bb2c7c-3798-4fce-a0bb-fcb1ca86927a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cf81d7c-237b-4c31-b407-ddadddb534c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8] Decompressors:map[bz2:0x1400053dd40 gz:0x1400053dd48 tar:0x1400053dcf0 tar.bz2:0x1400053dd00 tar.gz:0x1400053dd10 tar.xz:0x1400053dd20 tar.zst:0x1400053dd30 tbz2:0x1400053dd00 tgz:0x140005
3dd10 txz:0x1400053dd20 tzst:0x1400053dd30 xz:0x1400053dd50 zip:0x1400053dd60 zst:0x1400053dd58] Getters:map[file:0x140010466a0 http:0x14001084140 https:0x14001084190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"9f0806d5-1d81-42e3-a52d-079a45e3df0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0823 11:11:21.954622    1556 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:11:21.954763    1556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:11:21.954767    1556 out.go:309] Setting ErrFile to fd 2...
	I0823 11:11:21.954769    1556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:11:21.954879    1556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	W0823 11:11:21.954939    1556 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17086-1133/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17086-1133/.minikube/config/config.json: no such file or directory
	I0823 11:11:21.956120    1556 out.go:303] Setting JSON to true
	I0823 11:11:21.972641    1556 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":654,"bootTime":1692813627,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:11:21.972697    1556 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:11:21.978077    1556 out.go:97] [download-only-861000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:11:21.981087    1556 out.go:169] MINIKUBE_LOCATION=17086
	I0823 11:11:21.978236    1556 notify.go:220] Checking for updates...
	W0823 11:11:21.978268    1556 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball: no such file or directory
	I0823 11:11:21.988000    1556 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:11:21.991097    1556 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:11:21.994075    1556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:11:21.997043    1556 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	W0823 11:11:22.003026    1556 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0823 11:11:22.003219    1556 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:11:22.008076    1556 out.go:97] Using the qemu2 driver based on user configuration
	I0823 11:11:22.008097    1556 start.go:298] selected driver: qemu2
	I0823 11:11:22.008101    1556 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:11:22.008160    1556 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:11:22.011037    1556 out.go:169] Automatically selected the socket_vmnet network
	I0823 11:11:22.014809    1556 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0823 11:11:22.014880    1556 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0823 11:11:22.014936    1556 cni.go:84] Creating CNI manager for ""
	I0823 11:11:22.014953    1556 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 11:11:22.014958    1556 start_flags.go:319] config:
	{Name:download-only-861000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-861000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:11:22.020621    1556 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:11:22.024167    1556 out.go:97] Downloading VM boot image ...
	I0823 11:11:22.024198    1556 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	E0823 11:11:22.440748    1556 iso.go:90] Unable to download https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 Dst:/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso.download Pwd: Mode:2 Umask:---------- Detectors:[0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8] Decompressors:map[bz2:0x1400053dd40 gz:0x1400053dd48 tar:0x1400053dcf0 tar.bz2:0x1400053dd00 tar.gz:0x1400053dd10 tar.xz:0x1400053dd20 tar.zst:0x1400053dd30 tbz2:0x1400053dd00 tgz:0x1400053dd10 txz:0x1400053dd20 tzst:0x1400053dd30 xz:0x1400053dd50 zip:0x1400053dd60 zst:0x1400053dd58] Getters:map[file:0x14000fe1480 http:0x14000da9b80 https:0x14000da9bd0] Dir:false ProgressListener:<nil> Insecure:false Di
sableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	I0823 11:11:22.440917    1556 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:11:22.446466    1556 out.go:97] Downloading VM boot image ...
	I0823 11:11:22.446715    1556 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0823 11:11:30.978336    1556 out.go:97] Starting control plane node download-only-861000 in cluster download-only-861000
	I0823 11:11:30.978364    1556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 11:11:31.073961    1556 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0823 11:11:31.074044    1556 cache.go:57] Caching tarball of preloaded images
	I0823 11:11:31.074248    1556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 11:11:31.079365    1556 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0823 11:11:31.079374    1556 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:11:31.294710    1556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0823 11:11:45.710738    1556 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:11:45.710887    1556 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:11:46.353227    1556 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0823 11:11:46.353431    1556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/download-only-861000/config.json ...
	I0823 11:11:46.353453    1556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/download-only-861000/config.json: {Name:mka0c26ae17d68b5ee8df37dd23c2ce5da5a439c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:11:46.353679    1556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 11:11:46.353906    1556 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0823 11:11:46.715567    1556 out.go:169] 
	W0823 11:11:46.719511    1556 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8] Decompressors:map[bz2:0x1400053dd40 gz:0x1400053dd48 tar:0x1400053dcf0 tar.bz2:0x1400053dd00 tar.gz:0x1400053dd10 tar.xz:0x1400053dd20 tar.zst:0x1400053dd30 tbz2:0x1400053dd00 tgz:0x1400053dd10 txz:0x1400053dd20 tzst:0x1400053dd30 xz:0x1400053dd50 zip:0x1400053dd60 zst:0x1400053dd58] Getters:map[file:0x140010466a0 http:0x14001084140 https:0x14001084190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0823 11:11:46.719546    1556 out_reason.go:110] 
	W0823 11:11:46.726507    1556 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:11:46.731514    1556 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-861000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (24.83s)
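
The failing step is the kubectl cache download: the sha1 checksum URL returns 404, most likely because v1.16.0 predates published darwin/arm64 kubectl binaries. A minimal manual reproduction (a sketch, assuming curl is available on the agent; the URL is copied from the error above):

    # Follow redirects and print the final HTTP status; a 404 here matches
    # "Error downloading checksum file: bad response code: 404" in the log.
    curl -s -L -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1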

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
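
This failure is downstream of the checksum 404 above: the aborted download never wrote the binary into the cache. The same check the test performs can be run by hand (path copied from the assertion):

    # Expected to fail with "No such file or directory" after the 404 above.
    stat /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl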

TestOffline (10.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-950000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-950000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.993696459s)

-- stdout --
	* [offline-docker-950000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-950000 in cluster offline-docker-950000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-950000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 11:57:20.638007    3828 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:57:20.638116    3828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:20.638118    3828 out.go:309] Setting ErrFile to fd 2...
	I0823 11:57:20.638120    3828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:20.638266    3828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:57:20.639494    3828 out.go:303] Setting JSON to false
	I0823 11:57:20.656236    3828 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3414,"bootTime":1692813626,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:57:20.656315    3828 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:57:20.661082    3828 out.go:177] * [offline-docker-950000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:57:20.668014    3828 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:57:20.668019    3828 notify.go:220] Checking for updates...
	I0823 11:57:20.674960    3828 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:57:20.678033    3828 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:57:20.681019    3828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:57:20.684001    3828 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:57:20.687016    3828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:57:20.690339    3828 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:57:20.690387    3828 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:57:20.693941    3828 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:57:20.705047    3828 start.go:298] selected driver: qemu2
	I0823 11:57:20.705055    3828 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:57:20.705062    3828 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:57:20.706917    3828 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:57:20.709928    3828 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:57:20.713067    3828 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:57:20.713091    3828 cni.go:84] Creating CNI manager for ""
	I0823 11:57:20.713098    3828 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:57:20.713102    3828 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:57:20.713108    3828 start_flags.go:319] config:
	{Name:offline-docker-950000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:offline-docker-950000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0}
	I0823 11:57:20.717175    3828 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:57:20.721978    3828 out.go:177] * Starting control plane node offline-docker-950000 in cluster offline-docker-950000
	I0823 11:57:20.725909    3828 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:57:20.725930    3828 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:57:20.725937    3828 cache.go:57] Caching tarball of preloaded images
	I0823 11:57:20.726000    3828 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:57:20.726005    3828 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:57:20.726059    3828 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/offline-docker-950000/config.json ...
	I0823 11:57:20.726071    3828 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/offline-docker-950000/config.json: {Name:mkcb8a0a2e9b3c572d533a5d1f3d1340be8ba43f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:57:20.726260    3828 start.go:365] acquiring machines lock for offline-docker-950000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:20.726286    3828 start.go:369] acquired machines lock for "offline-docker-950000" in 21.208µs
	I0823 11:57:20.726296    3828 start.go:93] Provisioning new machine with config: &{Name:offline-docker-950000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.0 ClusterName:offline-docker-950000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:20.726330    3828 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:20.730910    3828 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:20.745833    3828 start.go:159] libmachine.API.Create for "offline-docker-950000" (driver="qemu2")
	I0823 11:57:20.745858    3828 client.go:168] LocalClient.Create starting
	I0823 11:57:20.745953    3828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:20.745979    3828 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:20.745990    3828 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:20.746034    3828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:20.746053    3828 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:20.746062    3828 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:20.746382    3828 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:20.922520    3828 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:21.112887    3828 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:21.112896    3828 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:21.113044    3828 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2
	I0823 11:57:21.126204    3828 main.go:141] libmachine: STDOUT: 
	I0823 11:57:21.126224    3828 main.go:141] libmachine: STDERR: 
	I0823 11:57:21.126293    3828 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2 +20000M
	I0823 11:57:21.134183    3828 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:21.134197    3828 main.go:141] libmachine: STDERR: 
	I0823 11:57:21.134230    3828 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2
	I0823 11:57:21.134239    3828 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:21.134281    3828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:79:20:ff:9a:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2
	I0823 11:57:21.135988    3828 main.go:141] libmachine: STDOUT: 
	I0823 11:57:21.136001    3828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:21.136025    3828 client.go:171] LocalClient.Create took 390.173167ms
	I0823 11:57:23.138027    3828 start.go:128] duration metric: createHost completed in 2.411767s
	I0823 11:57:23.138053    3828 start.go:83] releasing machines lock for "offline-docker-950000", held for 2.411844375s
	W0823 11:57:23.138074    3828 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:23.145213    3828 out.go:177] * Deleting "offline-docker-950000" in qemu2 ...
	W0823 11:57:23.153792    3828 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:23.153804    3828 start.go:687] Will try again in 5 seconds ...
	I0823 11:57:28.155887    3828 start.go:365] acquiring machines lock for offline-docker-950000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:28.156350    3828 start.go:369] acquired machines lock for "offline-docker-950000" in 348.125µs
	I0823 11:57:28.156469    3828 start.go:93] Provisioning new machine with config: &{Name:offline-docker-950000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.0 ClusterName:offline-docker-950000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:28.156770    3828 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:28.166400    3828 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:28.209440    3828 start.go:159] libmachine.API.Create for "offline-docker-950000" (driver="qemu2")
	I0823 11:57:28.209481    3828 client.go:168] LocalClient.Create starting
	I0823 11:57:28.209587    3828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:28.209645    3828 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:28.209667    3828 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:28.209734    3828 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:28.209769    3828 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:28.209785    3828 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:28.210262    3828 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:28.340778    3828 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:28.548498    3828 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:28.548508    3828 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:28.548658    3828 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2
	I0823 11:57:28.557264    3828 main.go:141] libmachine: STDOUT: 
	I0823 11:57:28.557279    3828 main.go:141] libmachine: STDERR: 
	I0823 11:57:28.557335    3828 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2 +20000M
	I0823 11:57:28.564507    3828 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:28.564519    3828 main.go:141] libmachine: STDERR: 
	I0823 11:57:28.564530    3828 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2
	I0823 11:57:28.564536    3828 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:28.564585    3828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:4e:f1:d5:ee:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/offline-docker-950000/disk.qcow2
	I0823 11:57:28.566103    3828 main.go:141] libmachine: STDOUT: 
	I0823 11:57:28.566116    3828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:28.566127    3828 client.go:171] LocalClient.Create took 356.649084ms
	I0823 11:57:30.568125    3828 start.go:128] duration metric: createHost completed in 2.411420791s
	I0823 11:57:30.568149    3828 start.go:83] releasing machines lock for "offline-docker-950000", held for 2.411866333s
	W0823 11:57:30.568243    3828 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-950000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-950000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:30.577461    3828 out.go:177] 
	W0823 11:57:30.582497    3828 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:57:30.582504    3828 out.go:239] * 
	* 
	W0823 11:57:30.583016    3828 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:57:30.593468    3828 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-950000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-08-23 11:57:30.602834 -0700 PDT m=+2768.854432751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-950000 -n offline-docker-950000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-950000 -n offline-docker-950000: exit status 7 (32.450667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-950000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-950000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-950000
--- FAIL: TestOffline (10.13s)
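
Both VM creation attempts die at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, which points at the agent environment (the socket_vmnet daemon not running) rather than the code under test. A quick triage on the agent (a sketch, assuming the install paths shown in the log):

    # Is the unix socket present at the path minikube uses?
    test -S /var/run/socket_vmnet && echo "socket present" || echo "socket missing"
    # Is the socket_vmnet daemon itself running?
    pgrep -fl socket_vmnet || echo "no socket_vmnet process"

The same "Connection refused" signature recurs in most of the other qemu2 Start failures in the table above, so fixing the daemon on this agent would likely clear many of them at once.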

TestAddons/parallel/Registry (720.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001428208s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
addons_test.go:308: TestAddons/parallel/Registry: showing logs for failed pods as of 2023-08-23 11:30:43.438552 -0700 PDT m=+1161.579181042
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
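
The wait loop that times out here polls the kube-system namespace for pods carrying the actual-registry=true label. The equivalent manual query (a sketch, assuming kubectl is pointed at the addons-603000 cluster, or using the binary under test) would be:

    kubectl get pods -n kube-system -l actual-registry=true
    # or, via the binary under test:
    out/minikube-darwin-arm64 -p addons-603000 kubectl -- get pods -n kube-system -l actual-registry=true
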
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-603000 -n addons-603000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-603000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | --download-only -p             | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT |                     |
	|         | binary-mirror-536000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49317         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-536000        | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | -p addons-603000               | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:18 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT |                     |
	|         | addons-603000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:12:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:12:01.473993    1632 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:12:01.474116    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474118    1632 out.go:309] Setting ErrFile to fd 2...
	I0823 11:12:01.474121    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474234    1632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:12:01.475282    1632 out.go:303] Setting JSON to false
	I0823 11:12:01.490493    1632 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":694,"bootTime":1692813627,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:12:01.490568    1632 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:12:01.495699    1632 out.go:177] * [addons-603000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:12:01.502675    1632 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:12:01.502715    1632 notify.go:220] Checking for updates...
	I0823 11:12:01.509745    1632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:01.512689    1632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:12:01.515810    1632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:12:01.518747    1632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:12:01.521752    1632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:12:01.524877    1632 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:12:01.528782    1632 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:12:01.535731    1632 start.go:298] selected driver: qemu2
	I0823 11:12:01.535736    1632 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:12:01.535742    1632 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:12:01.537642    1632 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:12:01.540695    1632 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:12:01.543724    1632 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:12:01.543758    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:01.543773    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:01.543776    1632 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:12:01.543783    1632 start_flags.go:319] config:
	{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:01.547827    1632 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:12:01.555769    1632 out.go:177] * Starting control plane node addons-603000 in cluster addons-603000
	I0823 11:12:01.559683    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:01.559703    1632 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:12:01.559720    1632 cache.go:57] Caching tarball of preloaded images
	I0823 11:12:01.559801    1632 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:12:01.559806    1632 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:12:01.560010    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:01.560023    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json: {Name:mkc2981401a387d0fc588c73edb3c2135fa892ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:01.560240    1632 start.go:365] acquiring machines lock for addons-603000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:12:01.560349    1632 start.go:369] acquired machines lock for "addons-603000" in 102.292µs
	I0823 11:12:01.560360    1632 start.go:93] Provisioning new machine with config: &{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:01.560401    1632 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:12:01.568742    1632 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0823 11:12:01.911891    1632 start.go:159] libmachine.API.Create for "addons-603000" (driver="qemu2")
	I0823 11:12:01.911940    1632 client.go:168] LocalClient.Create starting
	I0823 11:12:01.912112    1632 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:12:02.016848    1632 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:12:02.196172    1632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:12:02.677712    1632 main.go:141] libmachine: Creating SSH key...
	I0823 11:12:02.710073    1632 main.go:141] libmachine: Creating Disk image...
	I0823 11:12:02.710078    1632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:12:02.710280    1632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.789119    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.789143    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.789223    1632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2 +20000M
	I0823 11:12:02.796505    1632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:12:02.796516    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.796535    1632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
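The disk comes together in the two qemu-img steps visible above: a raw seed image is converted to qcow2, then grown with a relative resize (the leading + in +20000M). A minimal Go sketch of that pair of invocations, assuming qemu-img is on PATH; the paths and the helper name are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createDisk mirrors the two qemu-img calls in the log: convert the raw
// seed image to qcow2, then grow it by sizeMB megabytes.
func createDisk(raw, qcow2 string, sizeMB int) error {
	if out, err := exec.Command("qemu-img", "convert",
		"-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// The leading '+' makes the resize relative, as in the logged command.
	if out, err := exec.Command("qemu-img", "resize",
		qcow2, fmt.Sprintf("+%dM", sizeMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		log.Fatal(err)
	}
}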
	I0823 11:12:02.796546    1632 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:12:02.796585    1632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:3b:71:83:24:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.868731    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.868763    1632 main.go:141] libmachine: STDERR: 
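Note that qemu-system-aarch64 is not executed directly: the whole command line is handed to socket_vmnet_client, which, as I understand socket_vmnet's design, connects to the /var/run/socket_vmnet socket and hands the connected descriptor to QEMU as fd 3, matching the -netdev socket,id=net0,fd=3 flag. A hedged sketch of assembling that wrapper invocation, with most flags trimmed from the full command logged above:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// socket_vmnet_client execs its trailing arguments with the vmnet
	// socket inherited as fd 3, which is why the netdev says fd=3.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64", "-M", "virt", "-cpu", "host",
		"-m", "4000", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0,mac=da:3b:71:83:24:7a",
		"-netdev", "socket,id=net0,fd=3",
		"-daemonize", "disk.qcow2", // remaining flags trimmed for brevity
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}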
	I0823 11:12:02.868768    1632 main.go:141] libmachine: Attempt 0
	I0823 11:12:02.868785    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:04.870905    1632 main.go:141] libmachine: Attempt 1
	I0823 11:12:04.871005    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:06.873192    1632 main.go:141] libmachine: Attempt 2
	I0823 11:12:06.873220    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:08.875257    1632 main.go:141] libmachine: Attempt 3
	I0823 11:12:08.875269    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:10.877286    1632 main.go:141] libmachine: Attempt 4
	I0823 11:12:10.877310    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:12.879362    1632 main.go:141] libmachine: Attempt 5
	I0823 11:12:12.879382    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881424    1632 main.go:141] libmachine: Attempt 6
	I0823 11:12:14.881454    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881611    1632 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0823 11:12:14.881699    1632 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e79d7d}
	I0823 11:12:14.881706    1632 main.go:141] libmachine: Found match: da:3b:71:83:24:7a
	I0823 11:12:14.881720    1632 main.go:141] libmachine: IP: 192.168.105.2
	I0823 11:12:14.881729    1632 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
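The qemu2 driver has no channel for asking the guest its address, so it polls macOS's DHCP lease database, /var/db/dhcpd_leases, every two seconds for an entry whose hardware address matches the MAC it generated; the "Attempt 0..6" lines above are that loop. A simplified Go version (the real lease file is a series of {...} blocks; this flat scan keeps only the two fields the lookup needs):

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findLease scans /var/db/dhcpd_leases for an hw_address line containing
// mac and returns the ip_address seen just before it.
func findLease(mac string) (string, bool) {
	data, err := os.ReadFile("/var/db/dhcpd_leases")
	if err != nil {
		return "", false
	}
	var ip string
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v
		}
		if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
			return ip, ip != ""
		}
	}
	return "", false
}

func main() {
	mac := "da:3b:71:83:24:7a" // the MAC assigned in the log above
	for attempt := 0; attempt < 60; attempt++ {
		if ip, ok := findLease(mac); ok {
			fmt.Println("IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no lease found")
}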
	I0823 11:12:15.887997    1632 machine.go:88] provisioning docker machine ...
	I0823 11:12:15.888014    1632 buildroot.go:166] provisioning hostname "addons-603000"
	I0823 11:12:15.888459    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.888726    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.888732    1632 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-603000 && echo "addons-603000" | sudo tee /etc/hostname
	I0823 11:12:15.942083    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-603000
	
	I0823 11:12:15.942154    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.942421    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.942431    1632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-603000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-603000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-603000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:12:15.994730    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 11:12:15.994741    1632 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:12:15.994754    1632 buildroot.go:174] setting up certificates
	I0823 11:12:15.994761    1632 provision.go:83] configureAuth start
	I0823 11:12:15.994765    1632 provision.go:138] copyHostCerts
	I0823 11:12:15.994890    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:12:15.995099    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
	I0823 11:12:15.995204    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:12:15.995273    1632 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.addons-603000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-603000]
	I0823 11:12:16.087853    1632 provision.go:172] copyRemoteCerts
	I0823 11:12:16.087919    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:12:16.087927    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.117611    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:12:16.124255    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0823 11:12:16.130804    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0823 11:12:16.138250    1632 provision.go:86] duration metric: configureAuth took 143.474625ms
	I0823 11:12:16.138257    1632 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:12:16.138365    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:16.138406    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.138621    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.138626    1632 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:12:16.188528    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:12:16.188533    1632 buildroot.go:70] root file system type: tmpfs
	I0823 11:12:16.188589    1632 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:12:16.188629    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.188853    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.188886    1632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:12:16.244639    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0823 11:12:16.244690    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.244933    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.244942    1632 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:12:16.607992    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
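The SSH command above is an update-only-if-changed guard: diff exits non-zero when the two unit files differ or, as on this first boot, when docker.service does not exist yet ("can't stat"), so the mv / daemon-reload / restart branch runs only when something actually changed. The same guard written host-side in Go, with illustrative paths; the point is to skip a needless docker restart on reruns:

package main

import (
	"bytes"
	"errors"
	"io/fs"
	"log"
	"os"
)

// installIfChanged writes content to path and reports whether anything
// changed, so callers can skip daemon-reload/restart on a no-op.
func installIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	switch {
	case err == nil && bytes.Equal(old, content):
		return false, nil // identical: nothing to do
	case err != nil && !errors.Is(err, fs.ErrNotExist):
		return false, err // a real read error, not just "absent"
	}
	return true, os.WriteFile(path, content, 0o644)
}

func main() {
	changed, err := installIfChanged("docker.service", []byte("[Unit]\n"))
	if err != nil {
		log.Fatal(err)
	}
	if changed {
		log.Println("unit changed: daemon-reload and restart would run here")
	}
}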
	
	I0823 11:12:16.608005    1632 machine.go:91] provisioned docker machine in 720.015167ms
	I0823 11:12:16.608010    1632 client.go:171] LocalClient.Create took 14.696413292s
	I0823 11:12:16.608025    1632 start.go:167] duration metric: libmachine.API.Create for "addons-603000" took 14.696491208s
	I0823 11:12:16.608031    1632 start.go:300] post-start starting for "addons-603000" (driver="qemu2")
	I0823 11:12:16.608036    1632 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:12:16.608110    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:12:16.608120    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.635532    1632 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:12:16.636917    1632 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:12:16.636928    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:12:16.636998    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:12:16.637031    1632 start.go:303] post-start completed in 28.997291ms
	I0823 11:12:16.637396    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:16.637545    1632 start.go:128] duration metric: createHost completed in 15.077498208s
	I0823 11:12:16.637587    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.637809    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.637813    1632 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0823 11:12:16.687915    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692814336.418860543
	
	I0823 11:12:16.687923    1632 fix.go:206] guest clock: 1692814336.418860543
	I0823 11:12:16.687928    1632 fix.go:219] Guest: 2023-08-23 11:12:16.418860543 -0700 PDT Remote: 2023-08-23 11:12:16.637549 -0700 PDT m=+15.183153626 (delta=-218.688457ms)
	I0823 11:12:16.687939    1632 fix.go:190] guest clock delta is within tolerance: -218.688457ms
	I0823 11:12:16.687945    1632 start.go:83] releasing machines lock for "addons-603000", held for 15.127950375s
	I0823 11:12:16.688248    1632 ssh_runner.go:195] Run: cat /version.json
	I0823 11:12:16.688258    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.688283    1632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:12:16.688315    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.830956    1632 ssh_runner.go:195] Run: systemctl --version
	I0823 11:12:16.833845    1632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:12:16.836334    1632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:12:16.836377    1632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 11:12:16.842975    1632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 11:12:16.842984    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.843114    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:16.850298    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0823 11:12:16.854361    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:12:16.858207    1632 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:12:16.858231    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:12:16.861904    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.865421    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:12:16.868360    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.871352    1632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:12:16.874459    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:12:16.878023    1632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:12:16.881145    1632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:12:16.883828    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:16.960568    1632 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 11:12:16.969157    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.969235    1632 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:12:16.974606    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.980054    1632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:12:16.985654    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.990412    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:16.995694    1632 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 11:12:17.032758    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:17.037631    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:17.043070    1632 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:12:17.044366    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:12:17.046802    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:12:17.051842    1632 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:12:17.122781    1632 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:12:17.203234    1632 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:12:17.203250    1632 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0823 11:12:17.208465    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:17.289831    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:18.445051    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15523225s)
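The 144-byte /etc/docker/daemon.json pushed just before this restart is what "configuring docker to use cgroupfs as cgroup driver" refers to; dockerd reads the driver from its exec-opts. The exact file contents are not shown in the log, so this is only a guess at their shape, generated from Go with dockerd's standard exec-opts key:

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	cfg := map[string]any{
		// Standard dockerd option selecting the cgroupfs cgroup driver,
		// matching the kubelet's cgroupDriver: cgroupfs further below.
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("daemon.json", out, 0o644); err != nil {
		log.Fatal(err)
	}
}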
	I0823 11:12:18.445131    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.516370    1632 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0823 11:12:18.597904    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.678212    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.761579    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0823 11:12:18.769154    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.854594    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0823 11:12:18.877760    1632 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0823 11:12:18.877836    1632 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0823 11:12:18.879918    1632 start.go:534] Will wait 60s for crictl version
	I0823 11:12:18.879971    1632 ssh_runner.go:195] Run: which crictl
	I0823 11:12:18.881335    1632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0823 11:12:18.895610    1632 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0823 11:12:18.895681    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.905007    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.918762    1632 out.go:204] * Preparing Kubernetes v1.28.0 on Docker 24.0.4 ...
	I0823 11:12:18.918902    1632 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:12:18.920430    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
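This bash one-liner pins host.minikube.internal inside the guest: strip any existing line ending in the name, append the fresh ip-to-name mapping, and copy the temp file back over /etc/hosts. An equivalent helper in Go (simplified, and operating locally, whereas the real step runs remotely over SSH):

package main

import (
	"log"
	"os"
	"strings"
)

// pinHost rewrites a hosts file so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("hosts", "192.168.105.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}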
	I0823 11:12:18.924018    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:18.924057    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:18.929191    1632 docker.go:636] Got preloaded images: 
	I0823 11:12:18.929199    1632 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.0 wasn't preloaded
	I0823 11:12:18.929238    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:18.932021    1632 ssh_runner.go:195] Run: which lz4
	I0823 11:12:18.933304    1632 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0823 11:12:18.934619    1632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0823 11:12:18.934635    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356920680 bytes)
	I0823 11:12:20.248127    1632 docker.go:600] Took 1.314887 seconds to copy over tarball
	I0823 11:12:20.248187    1632 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 11:12:21.298381    1632 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.050206542s)
	I0823 11:12:21.298396    1632 ssh_runner.go:146] rm: /preloaded.tar.lz4
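This is the preload fast path: rather than pulling each image, the host scp's a single lz4 tarball of the docker image store (356920680 bytes per the scp line above), unpacks it over /var, then deletes it, after which `docker images` immediately reports the eight preloaded images listed below. The extraction step as a small Go wrapper around the same tar invocation, assuming tar and lz4 exist in the guest as the buildroot image provides:

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed tarball over dir, matching
// `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` from the log.
func extractPreload(tarball, dir string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}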
	I0823 11:12:21.314358    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:21.317830    1632 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0823 11:12:21.322906    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:21.401236    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:23.532716    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.131514959s)
	I0823 11:12:23.532818    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:23.539039    1632 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0823 11:12:23.539047    1632 cache_images.go:84] Images are preloaded, skipping loading
	I0823 11:12:23.539094    1632 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:12:23.547073    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:23.547082    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:23.547113    1632 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:12:23.547124    1632 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-603000 NodeName:addons-603000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0823 11:12:23.547195    1632 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-603000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
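The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the kubeadm options struct logged at kubeadm.go:176. A toy version of that rendering with text/template; the struct, field set, and template text here are invented for illustration and far smaller than minikube's real template:

package main

import (
	"log"
	"os"
	"text/template"
)

// opts is a toy subset of the kubeadm options in the log; the real struct
// also carries the per-component ExtraArgs maps shown above.
type opts struct {
	AdvertiseAddress  string
	KubernetesVersion string
	PodSubnet         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the run above.
	err := t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.105.2",
		KubernetesVersion: "v1.28.0",
		PodSubnet:         "10.244.0.0/16",
	})
	if err != nil {
		log.Fatal(err)
	}
}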
	
	I0823 11:12:23.547230    1632 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-603000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0823 11:12:23.547289    1632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0823 11:12:23.550418    1632 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:12:23.550450    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:12:23.553530    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0823 11:12:23.558535    1632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 11:12:23.563462    1632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0823 11:12:23.568191    1632 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0823 11:12:23.569452    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:23.573456    1632 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000 for IP: 192.168.105.2
	I0823 11:12:23.573466    1632 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.573659    1632 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:12:23.735121    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt ...
	I0823 11:12:23.735127    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt: {Name:mk63b4dc9bf64742b3a2443ec5bb13fecbad997f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735357    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key ...
	I0823 11:12:23.735361    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key: {Name:mk69cc2166a61f2546ed0135cfb5fd8f801bd14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735483    1632 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:12:23.796242    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt ...
	I0823 11:12:23.796246    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt: {Name:mk419a1ff00ed72437ca83474e3a88fdfaeeeee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796377    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key ...
	I0823 11:12:23.796380    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key: {Name:mk4c7f9a48478507429af69af483ca06fbc978e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796514    1632 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key
	I0823 11:12:23.796523    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt with IP's: []
	I0823 11:12:24.000999    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt ...
	I0823 11:12:24.001008    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: {Name:mkf571a66a11fe461d7b44eaf6283e5341b1a7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001274    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key ...
	I0823 11:12:24.001277    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key: {Name:mkcf4093aeeeecec47bfe93d95e186f81025a994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001388    1632 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969
	I0823 11:12:24.001398    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0823 11:12:24.112607    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 ...
	I0823 11:12:24.112611    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969: {Name:mkc8690ae4b550cd6392c142a6c0c10c4e6a5520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112766    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 ...
	I0823 11:12:24.112769    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969: {Name:mkce56fce274146ffaf24072ae4ff4b5e0acf555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112886    1632 certs.go:337] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt
	I0823 11:12:24.113091    1632 certs.go:341] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key
	I0823 11:12:24.113203    1632 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key
	I0823 11:12:24.113221    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt with IP's: []
	I0823 11:12:24.164570    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt ...
	I0823 11:12:24.164574    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt: {Name:mk78bcc5a9a59a81499b830c0df5af65d761e828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.164724    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key ...
	I0823 11:12:24.164727    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key: {Name:mk7a6ea56c8d07401b6d3adda9d302f5989259f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
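Everything in this block (the minikubeCA and proxyClientCA roots, then the client, apiserver, and aggregator leaf certs) is generated in-process by crypto.go with Go's crypto/x509, not by shelling out to openssl. A compressed sketch of the CA step; the key size and validity below are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

// newCA builds a self-signed CA, the kind of artifact written to
// ca.crt/ca.key above. Leaf certs would then pass this template as the
// parent and add the SAN list seen in the log.
func newCA() (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	cert, key, err := newCA()
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("ca.crt", cert, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("ca.key", key, 0o600); err != nil {
		log.Fatal(err)
	}
}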
	I0823 11:12:24.165026    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:12:24.165052    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:12:24.165076    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:12:24.165112    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:12:24.165474    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:12:24.173558    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:12:24.180163    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:12:24.186570    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 11:12:24.193721    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:12:24.200859    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:12:24.207727    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:12:24.214435    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:12:24.221806    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:12:24.228902    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 11:12:24.234839    1632 ssh_runner.go:195] Run: openssl version
	I0823 11:12:24.236947    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:12:24.239793    1632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241347    1632 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241366    1632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.243416    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
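The name b5213941.0 is not arbitrary: `openssl x509 -hash -noout` prints the certificate's subject hash, and OpenSSL-style trust directories expect each CA to be reachable as a <hash>.0 symlink, which is how minikubeCA becomes trusted inside the guest. The same install step sketched in Go (os.Remove emulates the force flag of ln -fs):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links pemPath into an OpenSSL-style CA directory under its
// subject hash, mirroring the openssl -hash / ln -fs pair in the log.
func trustCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any existing link, as ln -fs would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}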
	I0823 11:12:24.246590    1632 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:12:24.247919    1632 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0823 11:12:24.247965    1632 kubeadm.go:404] StartCluster: {Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:24.248029    1632 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:12:24.253769    1632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:12:24.257096    1632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:12:24.259889    1632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:12:24.262662    1632 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 11:12:24.262675    1632 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 11:12:24.285841    1632 kubeadm.go:322] [init] Using Kubernetes version: v1.28.0
	I0823 11:12:24.285870    1632 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 11:12:24.340213    1632 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 11:12:24.340268    1632 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 11:12:24.340328    1632 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 11:12:24.397608    1632 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 11:12:24.406812    1632 out.go:204]   - Generating certificates and keys ...
	I0823 11:12:24.406874    1632 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 11:12:24.406907    1632 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 11:12:24.448173    1632 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0823 11:12:24.501859    1632 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0823 11:12:24.630928    1632 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0823 11:12:24.705637    1632 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0823 11:12:24.853624    1632 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0823 11:12:24.853703    1632 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:24.984031    1632 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0823 11:12:24.984098    1632 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:25.055121    1632 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0823 11:12:25.098919    1632 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0823 11:12:25.161126    1632 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0823 11:12:25.161156    1632 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 11:12:25.251684    1632 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 11:12:25.312453    1632 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 11:12:25.380616    1632 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 11:12:25.465178    1632 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 11:12:25.465413    1632 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 11:12:25.466498    1632 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 11:12:25.474829    1632 out.go:204]   - Booting up control plane ...
	I0823 11:12:25.474890    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 11:12:25.474941    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 11:12:25.474973    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 11:12:25.475045    1632 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 11:12:25.475095    1632 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 11:12:25.475114    1632 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 11:12:25.557775    1632 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 11:12:29.059855    1632 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.502356 seconds
	I0823 11:12:29.059917    1632 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0823 11:12:29.066243    1632 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0823 11:12:29.575724    1632 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0823 11:12:29.575818    1632 kubeadm.go:322] [mark-control-plane] Marking the node addons-603000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0823 11:12:30.082813    1632 kubeadm.go:322] [bootstrap-token] Using token: l2h0hz.d0iyqenbl2749eth
	I0823 11:12:30.096111    1632 out.go:204]   - Configuring RBAC rules ...
	I0823 11:12:30.096178    1632 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0823 11:12:30.096240    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0823 11:12:30.098015    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0823 11:12:30.099101    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0823 11:12:30.100289    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0823 11:12:30.101319    1632 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0823 11:12:30.105824    1632 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0823 11:12:30.274483    1632 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0823 11:12:30.492880    1632 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0823 11:12:30.493261    1632 kubeadm.go:322] 
	I0823 11:12:30.493290    1632 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0823 11:12:30.493295    1632 kubeadm.go:322] 
	I0823 11:12:30.493350    1632 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0823 11:12:30.493355    1632 kubeadm.go:322] 
	I0823 11:12:30.493371    1632 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0823 11:12:30.493409    1632 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0823 11:12:30.493436    1632 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0823 11:12:30.493440    1632 kubeadm.go:322] 
	I0823 11:12:30.493470    1632 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0823 11:12:30.493473    1632 kubeadm.go:322] 
	I0823 11:12:30.493507    1632 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0823 11:12:30.493512    1632 kubeadm.go:322] 
	I0823 11:12:30.493534    1632 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0823 11:12:30.493571    1632 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0823 11:12:30.493608    1632 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0823 11:12:30.493612    1632 kubeadm.go:322] 
	I0823 11:12:30.493652    1632 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0823 11:12:30.493686    1632 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0823 11:12:30.493689    1632 kubeadm.go:322] 
	I0823 11:12:30.493755    1632 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493808    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f \
	I0823 11:12:30.493820    1632 kubeadm.go:322] 	--control-plane 
	I0823 11:12:30.493824    1632 kubeadm.go:322] 
	I0823 11:12:30.493863    1632 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0823 11:12:30.493867    1632 kubeadm.go:322] 
	I0823 11:12:30.493911    1632 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493962    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f 
	I0823 11:12:30.494020    1632 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
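
The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's public-key pin: the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch that recomputes it from ca.crt (path taken from the scp steps earlier in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins sha256 over the DER-encoded public key info.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
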
	I0823 11:12:30.494027    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:30.494035    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:30.502806    1632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0823 11:12:30.507845    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0823 11:12:30.511861    1632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
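
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous lines. Its exact contents are not shown in this log, so the conflist below is an assumption: a representative bridge-plus-portmap chain of the kind such a file contains, written to a local file for illustration.

    package main

    import "os"

    // Assumed contents; minikube's real 1-k8s.conflist may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Write locally rather than to /etc/cni/net.d for the sketch.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
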
	I0823 11:12:30.516952    1632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:12:30.517017    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.517023    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128 minikube.k8s.io/name=addons-603000 minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.588330    1632 ops.go:34] apiserver oom_adj: -16
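
An oom_adj of -16 means the kubelet has made the apiserver much less likely to be chosen by the kernel OOM killer. A sketch of the probe minikube just ran, finding the pid with pgrep and reading /proc/<pid>/oom_adj (assumes a Linux /proc; the SSH transport is elided):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pidOut, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(pidOut))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
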
	I0823 11:12:30.588368    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.620791    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.158369    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.658375    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.158344    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.658320    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.158345    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.658343    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.158277    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.658250    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.158228    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.658245    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.158202    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.658170    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.156385    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.656540    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.158165    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.658197    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.158131    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.658110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.158144    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.658128    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.158110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.658102    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.158090    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.658123    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.158020    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.214120    1632 kubeadm.go:1081] duration metric: took 12.6974425s to wait for elevateKubeSystemPrivileges.
	I0823 11:12:43.214131    1632 kubeadm.go:406] StartCluster complete in 18.96661775s
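
The burst of `kubectl get sa default` calls above is a poll loop: minikube retries roughly every 500ms until the default service account exists, which is what elevateKubeSystemPrivileges waits on before binding cluster-admin. A simplified local sketch of that loop (the two-minute deadline is an assumption; binary and kubeconfig paths are from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.0/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account is available")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
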
	I0823 11:12:43.214140    1632 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214301    1632 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:43.214543    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214750    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:12:43.214793    1632 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0823 11:12:43.214848    1632 addons.go:69] Setting volumesnapshots=true in profile "addons-603000"
	I0823 11:12:43.214857    1632 addons.go:231] Setting addon volumesnapshots=true in "addons-603000"
	I0823 11:12:43.214861    1632 addons.go:69] Setting ingress=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214901    1632 addons.go:69] Setting default-storageclass=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 addons.go:231] Setting addon ingress=true in "addons-603000"
	I0823 11:12:43.214917    1632 addons.go:69] Setting cloud-spanner=true in profile "addons-603000"
	I0823 11:12:43.214937    1632 addons.go:69] Setting gcp-auth=true in profile "addons-603000"
	I0823 11:12:43.214943    1632 mustload.go:65] Loading cluster: addons-603000
	I0823 11:12:43.214929    1632 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-603000"
	I0823 11:12:43.214951    1632 addons.go:231] Setting addon cloud-spanner=true in "addons-603000"
	I0823 11:12:43.215174    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215232    1632 addons.go:69] Setting metrics-server=true in profile "addons-603000"
	I0823 11:12:43.215240    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215246    1632 addons.go:231] Setting addon metrics-server=true in "addons-603000"
	I0823 11:12:43.215279    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215384    1632 addons.go:69] Setting registry=true in profile "addons-603000"
	I0823 11:12:43.215406    1632 addons.go:231] Setting addon registry=true in "addons-603000"
	I0823 11:12:43.215406    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:43.215469    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215573    1632 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.215613    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215623    1632 addons.go:69] Setting ingress-dns=true in profile "addons-603000"
	I0823 11:12:43.215632    1632 addons.go:69] Setting storage-provisioner=true in profile "addons-603000"
	I0823 11:12:43.215635    1632 addons.go:231] Setting addon ingress-dns=true in "addons-603000"
	I0823 11:12:43.215647    1632 addons.go:231] Setting addon storage-provisioner=true in "addons-603000"
	I0823 11:12:43.215715    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215748    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214927    1632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-603000"
	I0823 11:12:43.215926    1632 addons.go:69] Setting inspektor-gadget=true in profile "addons-603000"
	I0823 11:12:43.215930    1632 addons.go:231] Setting addon inspektor-gadget=true in "addons-603000"
	I0823 11:12:43.215950    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.216258    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	W0823 11:12:43.216285    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216296    1632 addons.go:277] "addons-603000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0823 11:12:43.216301    1632 addons.go:467] Verifying addon ingress=true in "addons-603000"
	I0823 11:12:43.220697    1632 out.go:177] * Verifying ingress addon...
	W0823 11:12:43.216646    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216836    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216925    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217082    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217134    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217426    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217553    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	I0823 11:12:43.218066    1632 host.go:66] Checking if "addons-603000" exists ...
	W0823 11:12:43.229685    1632 addons.go:277] "addons-603000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229693    1632 addons.go:277] "addons-603000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229707    1632 addons_storage_classes.go:55] "addons-603000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0823 11:12:43.229696    1632 addons.go:277] "addons-603000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229727    1632 addons.go:277] "addons-603000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229749    1632 addons.go:277] "addons-603000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0823 11:12:43.230154    1632 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0823 11:12:43.235671    1632 addons.go:277] "addons-603000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0823 11:12:43.235676    1632 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0823 11:12:43.237659    1632 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-603000" context rescaled to 1 replicas
	I0823 11:12:43.239627    1632 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:43.245678    1632 out.go:177] * Verifying Kubernetes components...
	I0823 11:12:43.239670    1632 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0823 11:12:43.239676    1632 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.239682    1632 addons.go:231] Setting addon default-storageclass=true in "addons-603000"
	I0823 11:12:43.239679    1632 addons.go:467] Verifying addon registry=true in "addons-603000"
	I0823 11:12:43.239684    1632 addons.go:467] Verifying addon metrics-server=true in "addons-603000"
	I0823 11:12:43.241522    1632 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0823 11:12:43.257706    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.257792    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:12:43.259109    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
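
The sed pipeline above rewrites the CoreDNS ConfigMap in place so that host.minikube.internal resolves to the host-side gateway (192.168.105.1 on this run). The injected Corefile fragment, reconstructed from the command itself and shown here as a Go string for reference:

    package main

    import "fmt"

    // Fragment inserted ahead of the forward plugin in the Corefile.
    const hostsBlock = `        hosts {
               192.168.105.1 host.minikube.internal
               fallthrough
            }`

    func main() { fmt.Println(hostsBlock) }
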
	I0823 11:12:43.263610    1632 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0823 11:12:43.264359    1632 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.269028    1632 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.275582    1632 out.go:177] * Verifying registry addon...
	I0823 11:12:43.281668    1632 out.go:177] * Verifying csi-hostpath-driver addon...
	I0823 11:12:43.281674    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:12:43.281680    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0823 11:12:43.281692    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0823 11:12:43.288063    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0823 11:12:43.293691    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293700    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293699    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.294283    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0823 11:12:43.297149    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0823 11:12:43.298922    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
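
Each kapi.go wait above is a list-by-label poll against the apiserver. A minimal client-go sketch of one such check (kubeconfig path and selector mirror the log; a real wait would repeat this until the pods appear and become Ready):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
        if err != nil {
            panic(err)
        }
        fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
    }
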
	I0823 11:12:43.392760    1632 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0823 11:12:43.392773    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0823 11:12:43.396919    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.403660    1632 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0823 11:12:43.403671    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0823 11:12:43.426051    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.442459    1632 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0823 11:12:43.442469    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0823 11:12:43.465118    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0823 11:12:43.465127    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0823 11:12:43.485410    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0823 11:12:43.485421    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0823 11:12:43.505701    1632 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0823 11:12:43.505712    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0823 11:12:43.510666    1632 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.510672    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0823 11:12:43.522915    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.857341    1632 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0823 11:12:43.857696    1632 node_ready.go:35] waiting up to 6m0s for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859423    1632 node_ready.go:49] node "addons-603000" has status "Ready":"True"
	I0823 11:12:43.859431    1632 node_ready.go:38] duration metric: took 1.726334ms waiting for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859435    1632 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
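
Node readiness here is read off the node object's conditions. A client-go sketch of the check (node name and kubeconfig path from this log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-603000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
            }
        }
    }
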
	I0823 11:12:43.862557    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:12:45.872773    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:47.873129    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:49.841300    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0823 11:12:49.841316    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.876535    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0823 11:12:49.882169    1632 addons.go:231] Setting addon gcp-auth=true in "addons-603000"
	I0823 11:12:49.882191    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:49.882982    1632 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0823 11:12:49.882989    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.913095    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0823 11:12:49.916035    1632 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0823 11:12:49.920011    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0823 11:12:49.920017    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0823 11:12:49.927312    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0823 11:12:49.927320    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0823 11:12:49.934731    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:49.934739    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0823 11:12:49.943537    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:50.372936    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:50.550032    1632 addons.go:467] Verifying addon gcp-auth=true in "addons-603000"
	I0823 11:12:50.554005    1632 out.go:177] * Verifying gcp-auth addon...
	I0823 11:12:50.561304    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0823 11:12:50.564772    1632 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0823 11:12:50.564779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:50.566533    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.070598    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.570353    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.070168    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.570325    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.873631    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:53.069304    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:53.570391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.068627    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.570250    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.068767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.371099    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:55.570140    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.070737    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.570638    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.070296    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.374140    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:57.570286    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.070358    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.569814    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.070104    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.570841    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.872601    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:00.070509    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:00.570371    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.070409    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.569429    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.069671    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.372554    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:02.571673    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.069684    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.570027    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.070029    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.569893    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.872229    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:05.069764    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:05.569750    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.070699    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.569829    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.872245    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:07.070111    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:07.569767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.069905    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.571644    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.069600    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.372178    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:09.569699    1632 kapi.go:107] duration metric: took 19.008846917s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0823 11:13:09.574491    1632 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-603000 cluster.
	I0823 11:13:09.578500    1632 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0823 11:13:09.582495    1632 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0823 11:13:11.372160    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:13.871903    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:15.872253    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:18.371986    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:20.372421    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:22.871118    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:23.871704    1632 pod_ready.go:92] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.871715    1632 pod_ready.go:81] duration metric: took 40.010097208s waiting for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.871719    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.872661    1632 pod_ready.go:97] error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872669    1632 pod_ready.go:81] duration metric: took 947.666µs waiting for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	E0823 11:13:23.872673    1632 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872689    1632 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875032    1632 pod_ready.go:92] pod "etcd-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.875037    1632 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875040    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877331    1632 pod_ready.go:92] pod "kube-apiserver-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.877336    1632 pod_ready.go:81] duration metric: took 2.293291ms waiting for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877340    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879623    1632 pod_ready.go:92] pod "kube-controller-manager-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.879629    1632 pod_ready.go:81] duration metric: took 2.287416ms waiting for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879633    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072856    1632 pod_ready.go:92] pod "kube-proxy-wxvd5" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.072864    1632 pod_ready.go:81] duration metric: took 193.233541ms waiting for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072869    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.472997    1632 pod_ready.go:92] pod "kube-scheduler-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.473008    1632 pod_ready.go:81] duration metric: took 400.145667ms waiting for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.473012    1632 pod_ready.go:38] duration metric: took 40.614530833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:13:24.473025    1632 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:13:24.473121    1632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:13:24.478605    1632 api_server.go:72] duration metric: took 41.239942375s to wait for apiserver process to appear ...
	I0823 11:13:24.478610    1632 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:13:24.478617    1632 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0823 11:13:24.482269    1632 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0823 11:13:24.482906    1632 api_server.go:141] control plane version: v1.28.0
	I0823 11:13:24.482912    1632 api_server.go:131] duration metric: took 4.29975ms to wait for apiserver health ...
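
The healthz probe is a plain HTTPS GET whose body should be the string ok. A sketch of the request (TLS verification is skipped for brevity, which is an assumption; minikube itself trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
    }
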
	I0823 11:13:24.482914    1632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:13:24.674508    1632 system_pods.go:59] 7 kube-system pods found
	I0823 11:13:24.674519    1632 system_pods.go:61] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:24.674521    1632 system_pods.go:61] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:24.674524    1632 system_pods.go:61] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:24.674526    1632 system_pods.go:61] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:24.674531    1632 system_pods.go:61] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:24.674534    1632 system_pods.go:61] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:24.674539    1632 system_pods.go:61] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:24.674542    1632 system_pods.go:74] duration metric: took 191.630375ms to wait for pod list to return data ...
	I0823 11:13:24.674546    1632 default_sa.go:34] waiting for default service account to be created ...
	I0823 11:13:24.872182    1632 default_sa.go:45] found service account: "default"
	I0823 11:13:24.872197    1632 default_sa.go:55] duration metric: took 197.644583ms for default service account to be created ...
	I0823 11:13:24.872203    1632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0823 11:13:25.073612    1632 system_pods.go:86] 7 kube-system pods found
	I0823 11:13:25.073624    1632 system_pods.go:89] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:25.073627    1632 system_pods.go:89] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:25.073629    1632 system_pods.go:89] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:25.073632    1632 system_pods.go:89] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:25.073636    1632 system_pods.go:89] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:25.073641    1632 system_pods.go:89] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:25.073644    1632 system_pods.go:89] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:25.073647    1632 system_pods.go:126] duration metric: took 201.446458ms to wait for k8s-apps to be running ...
	I0823 11:13:25.073650    1632 system_svc.go:44] waiting for kubelet service to be running ....
	I0823 11:13:25.073715    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:13:25.078691    1632 system_svc.go:56] duration metric: took 5.037209ms WaitForService to wait for kubelet.
	I0823 11:13:25.078704    1632 kubeadm.go:581] duration metric: took 41.84005475s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0823 11:13:25.078718    1632 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:13:25.271033    1632 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:13:25.271069    1632 node_conditions.go:123] node cpu capacity is 2
	I0823 11:13:25.271075    1632 node_conditions.go:105] duration metric: took 192.359208ms to run NodePressure ...
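
The NodePressure step reads the node's reported capacity, which is where the 17784760Ki ephemeral-storage and 2-cpu figures above come from. A client-go sketch fetching the same values:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-603000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("ephemeral capacity %s, cpu capacity %s\n", storage.String(), cpu.String())
    }
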
	I0823 11:13:25.271080    1632 start.go:228] waiting for startup goroutines ...
	I0823 11:18:43.244336    1632 kapi.go:107] duration metric: took 6m0.011535667s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0823 11:18:43.244405    1632 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0823 11:18:43.298298    1632 kapi.go:107] duration metric: took 6m0.001378042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0823 11:18:43.298326    1632 kapi.go:107] duration metric: took 6m0.00762275s to wait for kubernetes.io/minikube-addons=registry ...
	W0823 11:18:43.298353    1632 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	W0823 11:18:43.298328    1632 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0823 11:18:43.304474    1632 out.go:177] * Enabled addons: volumesnapshots, cloud-spanner, storage-provisioner, metrics-server, ingress-dns, default-storageclass, inspektor-gadget, gcp-auth
	I0823 11:18:43.311559    1632 addons.go:502] enable addons completed in 6m0.094130167s: enabled=[volumesnapshots cloud-spanner storage-provisioner metrics-server ingress-dns default-storageclass inspektor-gadget gcp-auth]
	I0823 11:18:43.311576    1632 start.go:233] waiting for cluster config update ...
	I0823 11:18:43.311585    1632 start.go:242] writing updated cluster config ...
	I0823 11:18:43.312012    1632 ssh_runner.go:195] Run: rm -f paused
	I0823 11:18:43.342628    1632 start.go:600] kubectl: 1.27.2, cluster: 1.28.0 (minor skew: 1)
	I0823 11:18:43.345571    1632 out.go:177] * Done! kubectl is now configured to use "addons-603000" cluster and "default" namespace by default
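
The closing line flags a client/server minor-version skew of 1 (kubectl 1.27 against a 1.28 cluster), which is within kubectl's supported +/-1 window, so this is informational rather than a failure. A toy sketch of the comparison (versions hard-coded from this log; real code would query both sides):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        // "1.27.2" -> 27; input format is assumed well-formed.
        m, _ := strconv.Atoi(strings.Split(v, ".")[1])
        return m
    }

    func main() {
        kubectl, cluster := "1.27.2", "1.28.0"
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n",
            kubectl, cluster, minor(cluster)-minor(kubectl))
    }
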
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:30:43 UTC. --
	Aug 23 18:15:51 addons-603000 dockerd[1103]: time="2023-08-23T18:15:51.151599505Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:18:33 addons-603000 dockerd[1103]: time="2023-08-23T18:18:33.092897258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:18:33 addons-603000 dockerd[1103]: time="2023-08-23T18:18:33.092929425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:18:33 addons-603000 dockerd[1103]: time="2023-08-23T18:18:33.092939758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:18:33 addons-603000 dockerd[1103]: time="2023-08-23T18:18:33.092945842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:18:33 addons-603000 dockerd[1097]: time="2023-08-23T18:18:33.132845315Z" level=info msg="ignoring event" container=89be7df8f5d7dcdc7a0c87df485a2c7ea9b142c7e85eae340fd554ec50593049 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:18:33 addons-603000 dockerd[1103]: time="2023-08-23T18:18:33.133064818Z" level=info msg="shim disconnected" id=89be7df8f5d7dcdc7a0c87df485a2c7ea9b142c7e85eae340fd554ec50593049 namespace=moby
	Aug 23 18:18:33 addons-603000 dockerd[1103]: time="2023-08-23T18:18:33.133094193Z" level=warning msg="cleaning up after shim disconnected" id=89be7df8f5d7dcdc7a0c87df485a2c7ea9b142c7e85eae340fd554ec50593049 namespace=moby
	Aug 23 18:18:33 addons-603000 dockerd[1103]: time="2023-08-23T18:18:33.133098443Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.114502177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.114531260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.114537177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.114541260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:23:34 addons-603000 dockerd[1097]: time="2023-08-23T18:23:34.187824843Z" level=info msg="ignoring event" container=33b6b60ed9e557546bf1703af8eea1e09157e1e00441ca9a3dc016e3c9cd76e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.187820301Z" level=info msg="shim disconnected" id=33b6b60ed9e557546bf1703af8eea1e09157e1e00441ca9a3dc016e3c9cd76e5 namespace=moby
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.188065590Z" level=warning msg="cleaning up after shim disconnected" id=33b6b60ed9e557546bf1703af8eea1e09157e1e00441ca9a3dc016e3c9cd76e5 namespace=moby
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.188086340Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094207801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094239634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094428221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094440262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:28:41 addons-603000 dockerd[1097]: time="2023-08-23T18:28:41.143468399Z" level=info msg="ignoring event" container=0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.143514441Z" level=info msg="shim disconnected" id=0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98 namespace=moby
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.143542817Z" level=warning msg="cleaning up after shim disconnected" id=0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98 namespace=moby
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.143546942Z" level=info msg="cleaning up dead shim" namespace=moby
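	
	The journal shows the same shim load/teardown cycle at roughly five-minute intervals (18:18, 18:23, 18:28), which lines up with the CrashLoopBackOff restarts of minikube-ingress-dns in the kubelet section below. A sketch for tailing the cycle on the node (profile name taken from this run):
	
	  # Tail dockerd's journal inside the minikube VM around the restart window.
	  minikube -p addons-603000 ssh -- sudo journalctl -u docker --no-pager --since 18:28:00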
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	0f90525fcf0bb       1499ed4fbd0aa                                                                                                       2 minutes ago       Exited              minikube-ingress-dns      8                   ee55b672623b1
	e852eb8754f9d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf        17 minutes ago      Running             gcp-auth                  0                   b3f97bd7715ac
	db988e4e18bfe       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:900ee9d94e12c15caebf0f198a51b532e6e2e373f6adfdef6fc78d11f6ecc6ac   17 minutes ago      Running             gadget                    0                   12623df028552
	20ee50fe57df9       940f54a5bcae9                                                                                                       17 minutes ago      Running             kube-proxy                0                   046b3dd276694
	eb22253f397d0       97e04611ad434                                                                                                       18 minutes ago      Running             coredns                   0                   83362738c741a
	695ccc939e29c       762dce4090c5f                                                                                                       18 minutes ago      Running             kube-scheduler            0                   cd7498705ccfc
	0d85c39c58b71       9cdd6470f48c8                                                                                                       18 minutes ago      Running             etcd                      0                   52014f9a89d58
	9700e59743269       46cc66ccc7c19                                                                                                       18 minutes ago      Running             kube-controller-manager   0                   0e19c8d65b488
	594c65a938703       00543d2fe5d71                                                                                                       18 minutes ago      Running             kube-apiserver            0                   e5952ff9dbb6d
	
	* 
	* ==> coredns [eb22253f397d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48978 - 47852 "HINFO IN 1860146809240780526.8201528298968188454. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004478075s
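	
	The one-off i/o timeout against 10.96.0.1:443 means CoreDNS briefly could not reach the apiserver's ClusterIP; the later HINFO answer shows it recovered. A minimal in-cluster reachability probe (the busybox image and nc flags are assumptions, not part of this run):
	
	  # Zero-I/O TCP check of the kubernetes Service VIP from a throwaway pod.
	  kubectl --context addons-603000 run netcheck --rm -it --restart=Never \
	    --image=busybox -- nc -z -w 5 10.96.0.1 443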
	
	* 
	* ==> describe nodes <==
	* Name:               addons-603000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-603000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=addons-603000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:12:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-603000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:30:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:28:49 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:28:49 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:28:49 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 23 Aug 2023 18:28:49 +0000   Wed, 23 Aug 2023 18:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-603000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 d563976ae62b43f8a1fb53a177d29cdd
	  System UUID:                d563976ae62b43f8a1fb53a177d29cdd
	  Boot ID:                    95f14d55-f8c9-49e4-bb2e-505c410326e7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-ndv9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  gcp-auth                    gcp-auth-d4c87556c-k5hw7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5dd5756b68-blj5d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-603000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-603000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-603000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-wxvd5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-603000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-603000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-603000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-603000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-603000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-603000 event: Registered Node addons-603000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.645801] EINJ: EINJ table not found.
	[  +0.508913] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043174] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000868] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.060149] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.082417] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.495013] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.161467] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.082809] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.085797] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.141175] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.084612] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.080569] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.082356] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.081344] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.093418] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.547960] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +2.110352] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.040486] systemd-fstab-generator[1407]: Ignoring "noauto" for root device
	[  +4.639082] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[ +13.918576] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.422998] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.996433] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug23 18:13] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.103613] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [0d85c39c58b7] <==
	* {"level":"info","ts":"2023-08-23T18:12:26.543189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.543213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.54323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.543269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.550684Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.550945Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-603000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-23T18:12:26.551005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:12:26.551568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-23T18:12:26.555059Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.555101Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.555137Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.551069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:12:26.555541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-23T18:12:26.563277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:12:26.563307Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-08-23T18:12:48.393724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.007861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-blj5d\" ","response":"range_response_count:1 size:4755"}
	{"level":"info","ts":"2023-08-23T18:12:48.39376Z","caller":"traceutil/trace.go:171","msg":"trace[2046528636] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-blj5d; range_end:; response_count:1; response_revision:423; }","duration":"169.059532ms","start":"2023-08-23T18:12:48.224693Z","end":"2023-08-23T18:12:48.393753Z","steps":["trace[2046528636] 'range keys from in-memory index tree'  (duration: 168.95968ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:12:48.39392Z","caller":"traceutil/trace.go:171","msg":"trace[720409665] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"100.636151ms","start":"2023-08-23T18:12:48.29328Z","end":"2023-08-23T18:12:48.393917Z","steps":["trace[720409665] 'process raft request'  (duration: 77.612181ms)","trace[720409665] 'compare'  (duration: 22.773829ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-23T18:12:50.777156Z","caller":"traceutil/trace.go:171","msg":"trace[471484672] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"104.054592ms","start":"2023-08-23T18:12:50.673089Z","end":"2023-08-23T18:12:50.777143Z","steps":["trace[471484672] 'process raft request'  (duration: 103.971962ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:22:26.650207Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":655}
	{"level":"info","ts":"2023-08-23T18:22:26.650931Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":655,"took":"518.817µs","hash":1262660470}
	{"level":"info","ts":"2023-08-23T18:22:26.650949Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1262660470,"revision":655,"compact-revision":-1}
	{"level":"info","ts":"2023-08-23T18:27:26.652533Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":748}
	{"level":"info","ts":"2023-08-23T18:27:26.652937Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":748,"took":"267.17µs","hash":3279716501}
	{"level":"info","ts":"2023-08-23T18:27:26.652952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279716501,"revision":748,"compact-revision":655}
	
	* 
	* ==> gcp-auth [e852eb8754f9] <==
	* 2023/08/23 18:13:08 GCP Auth Webhook started!
	2023/08/23 18:13:18 failed to get releases file: Get "https://storage.googleapis.com/minikube-gcp-auth/releases.json": dial tcp: lookup storage.googleapis.com on 10.96.0.10:53: read udp 10.244.0.5:60048->10.96.0.10:53: read: connection refused
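	
	The webhook itself started; its one outbound fetch then failed at the in-cluster DNS step (connection refused from kube-dns at 10.96.0.10:53), consistent with the CoreDNS startup delay above. Re-running the lookup in-cluster separates DNS from upstream egress (the busybox image is an assumption):
	
	  # Resolve against the kube-dns Service IP explicitly.
	  kubectl --context addons-603000 run dnscheck --rm -it --restart=Never \
	    --image=busybox -- nslookup storage.googleapis.com 10.96.0.10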
	
	* 
	* ==> kernel <==
	*  18:30:43 up 18 min,  0 users,  load average: 0.24, 0.16, 0.10
	Linux addons-603000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [594c65a93870] <==
	* I0823 18:12:27.437032       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0823 18:12:27.437034       1 cache.go:39] Caches are synced for autoregister controller
	I0823 18:12:27.443118       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0823 18:12:27.481999       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0823 18:12:28.325487       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0823 18:12:28.327337       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0823 18:12:28.327348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0823 18:12:28.483580       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0823 18:12:28.494448       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0823 18:12:28.528987       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0823 18:12:28.530764       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0823 18:12:28.531188       1 controller.go:624] quota admission added evaluator for: endpoints
	I0823 18:12:28.532349       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:12:29.394948       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0823 18:12:30.000403       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0823 18:12:30.005168       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0823 18:12:30.011907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0823 18:12:42.297794       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0823 18:12:42.999693       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0823 18:12:43.740564       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:12:50.388002       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.33.188"}
	I0823 18:12:50.406209       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0823 18:17:27.368575       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:22:27.369052       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:27:27.369080       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [9700e5974326] <==
	* I0823 18:13:04.466033       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:13:04.486978       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:05.394947       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:13:05.398434       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:05.468971       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:13:05.471795       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:13:05.473423       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:13:05.473440       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0823 18:13:05.488947       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:05.491052       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:05.492818       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:05.492904       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0823 18:13:09.490531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="5.007096ms"
	I0823 18:13:09.490673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="12.225µs"
	I0823 18:13:12.451688       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="traces.gadget.kinvolk.io"
	I0823 18:13:12.451711       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0823 18:13:12.552518       1 shared_informer.go:318] Caches are synced for resource quota
	I0823 18:13:12.777696       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0823 18:13:12.777713       1 shared_informer.go:318] Caches are synced for garbage collector
	I0823 18:13:23.581296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.891249ms"
	I0823 18:13:23.581368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.846µs"
	I0823 18:13:35.007690       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:35.008253       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:13:35.019213       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:35.019817       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	
	* 
	* ==> kube-proxy [20ee50fe57df] <==
	* I0823 18:12:44.324386       1 server_others.go:69] "Using iptables proxy"
	I0823 18:12:44.336042       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0823 18:12:44.344520       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0823 18:12:44.344533       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0823 18:12:44.345190       1 server_others.go:152] "Using iptables Proxier"
	I0823 18:12:44.345242       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0823 18:12:44.345409       1 server.go:846] "Version info" version="v1.28.0"
	I0823 18:12:44.345419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:12:44.346019       1 config.go:188] "Starting service config controller"
	I0823 18:12:44.346074       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0823 18:12:44.346113       1 config.go:97] "Starting endpoint slice config controller"
	I0823 18:12:44.346120       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0823 18:12:44.346526       1 config.go:315] "Starting node config controller"
	I0823 18:12:44.346554       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0823 18:12:44.446872       1 shared_informer.go:318] Caches are synced for node config
	I0823 18:12:44.446887       1 shared_informer.go:318] Caches are synced for service config
	I0823 18:12:44.446897       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [695ccc939e29] <==
	* W0823 18:12:27.662067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0823 18:12:27.662073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0823 18:12:27.662103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0823 18:12:27.662264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0823 18:12:27.662344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0823 18:12:27.662399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0823 18:12:27.662439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0823 18:12:27.662446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0823 18:12:27.662483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0823 18:12:27.662530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0823 18:12:27.662556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0823 18:12:27.662568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0823 18:12:27.662623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0823 18:12:27.662630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0823 18:12:27.662674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:12:27.662681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0823 18:12:28.959674       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:30:44 UTC. --
	Aug 23 18:28:55 addons-603000 kubelet[2300]: E0823 18:28:55.071182    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:29:09 addons-603000 kubelet[2300]: I0823 18:29:09.071035    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:29:09 addons-603000 kubelet[2300]: E0823 18:29:09.071329    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:29:20 addons-603000 kubelet[2300]: I0823 18:29:20.071342    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:29:20 addons-603000 kubelet[2300]: E0823 18:29:20.071500    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:29:30 addons-603000 kubelet[2300]: E0823 18:29:30.076474    2300 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 23 18:29:30 addons-603000 kubelet[2300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 23 18:29:30 addons-603000 kubelet[2300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 23 18:29:30 addons-603000 kubelet[2300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 23 18:29:35 addons-603000 kubelet[2300]: I0823 18:29:35.070868    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:29:35 addons-603000 kubelet[2300]: E0823 18:29:35.070981    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:29:46 addons-603000 kubelet[2300]: I0823 18:29:46.072207    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:29:46 addons-603000 kubelet[2300]: E0823 18:29:46.072324    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:29:58 addons-603000 kubelet[2300]: I0823 18:29:58.071368    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:29:58 addons-603000 kubelet[2300]: E0823 18:29:58.071514    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:30:13 addons-603000 kubelet[2300]: I0823 18:30:13.071709    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:30:13 addons-603000 kubelet[2300]: E0823 18:30:13.071880    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:30:25 addons-603000 kubelet[2300]: I0823 18:30:25.071793    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:30:25 addons-603000 kubelet[2300]: E0823 18:30:25.071951    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:30:30 addons-603000 kubelet[2300]: E0823 18:30:30.076756    2300 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 23 18:30:30 addons-603000 kubelet[2300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 23 18:30:30 addons-603000 kubelet[2300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 23 18:30:30 addons-603000 kubelet[2300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 23 18:30:37 addons-603000 kubelet[2300]: I0823 18:30:37.071067    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:30:37 addons-603000 kubelet[2300]: E0823 18:30:37.071237    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	
-- /stdout --
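
Two follow-ups suggested by the kubelet section above, sketched against this profile's names: pull the previous logs of the crashing minikube-ingress-dns container, and check whether the missing ip6tables nat table can be loaded at all (the ip6table_nat module name is an assumption for this Buildroot kernel):

  # Last logs from the container that keeps exiting into CrashLoopBackOff.
  kubectl --context addons-603000 -n kube-system logs kube-ingress-dns-minikube --previous
  # Try to load the nat table the kubelet canary complains about.
  minikube -p addons-603000 ssh -- sudo modprobe ip6table_nat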
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-603000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.86s)

TestAddons/parallel/Ingress (0.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-603000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-603000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (34.669417ms)

** stderr **
	error: no matching resources found

** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
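
"no matching resources found" means kubectl wait had nothing to watch: no controller pod was ever created in ingress-nginx, so this is an addon-deployment failure rather than a readiness timeout. A direct look with the same selector the test uses distinguishes the two (a sketch, not part of the harness):

  kubectl --context addons-603000 -n ingress-nginx get deploy,pods \
    -l app.kubernetes.io/component=controller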
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-603000 -n addons-603000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-603000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | --download-only -p             | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT |                     |
	|         | binary-mirror-536000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49317         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-536000        | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | -p addons-603000               | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:18 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT |                     |
	|         | addons-603000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT | 23 Aug 23 11:30 PDT |
	|         | -p addons-603000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:43 PDT | 23 Aug 23 11:43 PDT |
	|         | addons-603000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:12:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:12:01.473993    1632 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:12:01.474116    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474118    1632 out.go:309] Setting ErrFile to fd 2...
	I0823 11:12:01.474121    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474234    1632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:12:01.475282    1632 out.go:303] Setting JSON to false
	I0823 11:12:01.490493    1632 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":694,"bootTime":1692813627,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:12:01.490568    1632 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:12:01.495699    1632 out.go:177] * [addons-603000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:12:01.502675    1632 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:12:01.502715    1632 notify.go:220] Checking for updates...
	I0823 11:12:01.509745    1632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:01.512689    1632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:12:01.515810    1632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:12:01.518747    1632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:12:01.521752    1632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:12:01.524877    1632 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:12:01.528782    1632 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:12:01.535731    1632 start.go:298] selected driver: qemu2
	I0823 11:12:01.535736    1632 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:12:01.535742    1632 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:12:01.537642    1632 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:12:01.540695    1632 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:12:01.543724    1632 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:12:01.543758    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:01.543773    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:01.543776    1632 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:12:01.543783    1632 start_flags.go:319] config:
	{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:01.547827    1632 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:12:01.555769    1632 out.go:177] * Starting control plane node addons-603000 in cluster addons-603000
	I0823 11:12:01.559683    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:01.559703    1632 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:12:01.559720    1632 cache.go:57] Caching tarball of preloaded images
	I0823 11:12:01.559801    1632 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:12:01.559806    1632 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:12:01.560010    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:01.560023    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json: {Name:mkc2981401a387d0fc588c73edb3c2135fa892ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
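
	The profile save above writes the cluster config shown earlier to config.json under a write lock. A minimal sketch of that serialization, assuming a trimmed stand-in struct (the real type in minikube's config package carries every field in the dump above) and a plain write in place of the lock-protected one:

    // Sketch: persist a trimmed cluster config as JSON, as profile.go does
    // for config.json above (minus locking and atomic rename).
    package main

    import (
        "encoding/json"
        "os"
    )

    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cfg := ClusterConfig{
            Name: "addons-603000", Driver: "qemu2", Memory: 4000, CPUs: 2,
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.28.0",
                ClusterName:       "addons-603000",
                ContainerRuntime:  "docker",
            },
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("config.json", data, 0o644); err != nil {
            panic(err)
        }
    }
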
	I0823 11:12:01.560240    1632 start.go:365] acquiring machines lock for addons-603000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:12:01.560349    1632 start.go:369] acquired machines lock for "addons-603000" in 102.292µs
	I0823 11:12:01.560360    1632 start.go:93] Provisioning new machine with config: &{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:01.560401    1632 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:12:01.568742    1632 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0823 11:12:01.911891    1632 start.go:159] libmachine.API.Create for "addons-603000" (driver="qemu2")
	I0823 11:12:01.911940    1632 client.go:168] LocalClient.Create starting
	I0823 11:12:01.912112    1632 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:12:02.016848    1632 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:12:02.196172    1632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:12:02.677712    1632 main.go:141] libmachine: Creating SSH key...
	I0823 11:12:02.710073    1632 main.go:141] libmachine: Creating Disk image...
	I0823 11:12:02.710078    1632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:12:02.710280    1632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.789119    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.789143    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.789223    1632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2 +20000M
	I0823 11:12:02.796505    1632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:12:02.796516    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.796535    1632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
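
	The two `executing: qemu-img ...` lines above are plain subprocess calls: convert the raw scratch file to qcow2, then grow it by the requested 20000 MB. A self-contained sketch of the same sequence, assuming qemu-img is on PATH and using short relative paths in place of the profile directory:

    // Convert-then-resize, mirroring the qemu-img invocations logged above.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(name string, args ...string) {
        // Combined output stands in for the STDOUT/STDERR pairs in the log.
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s %v: %v\n%s", name, args, err, out)
        }
        log.Printf("%s %v\n%s", name, args, out)
    }

    func main() {
        run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
        run("qemu-img", "resize", "disk.qcow2", "+20000M")
    }
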
	I0823 11:12:02.796546    1632 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:12:02.796585    1632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:3b:71:83:24:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.868731    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.868763    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.868768    1632 main.go:141] libmachine: Attempt 0
	I0823 11:12:02.868785    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:04.870905    1632 main.go:141] libmachine: Attempt 1
	I0823 11:12:04.871005    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:06.873192    1632 main.go:141] libmachine: Attempt 2
	I0823 11:12:06.873220    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:08.875257    1632 main.go:141] libmachine: Attempt 3
	I0823 11:12:08.875269    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:10.877286    1632 main.go:141] libmachine: Attempt 4
	I0823 11:12:10.877310    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:12.879362    1632 main.go:141] libmachine: Attempt 5
	I0823 11:12:12.879382    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881424    1632 main.go:141] libmachine: Attempt 6
	I0823 11:12:14.881454    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881611    1632 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0823 11:12:14.881699    1632 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e79d7d}
	I0823 11:12:14.881706    1632 main.go:141] libmachine: Found match: da:3b:71:83:24:7a
	I0823 11:12:14.881720    1632 main.go:141] libmachine: IP: 192.168.105.2
	I0823 11:12:14.881729    1632 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
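
	Attempts 0 through 6 above poll /var/db/dhcpd_leases on the host roughly every two seconds until the VM's MAC address appears, then read the IP out of the matching lease record. A simplified sketch of that loop (real lease parsing is more careful than this substring scan):

    // Poll macOS's dhcpd lease file for a MAC, as in the attempts above.
    // Assumes the ip_address= field precedes hw_address= within a record,
    // matching the lease entry format shown in the log.
    package main

    import (
        "fmt"
        "os"
        "strings"
        "time"
    )

    func waitForIP(mac string, attempts int) (string, error) {
        for i := 0; i < attempts; i++ {
            data, _ := os.ReadFile("/var/db/dhcpd_leases")
            if idx := strings.Index(string(data), mac); idx >= 0 {
                rec := string(data[:idx])
                if j := strings.LastIndex(rec, "ip_address="); j >= 0 {
                    ip := rec[j+len("ip_address="):]
                    return strings.TrimSpace(strings.SplitN(ip, "\n", 2)[0]), nil
                }
            }
            time.Sleep(2 * time.Second) // the attempts above are ~2s apart
        }
        return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
    }

    func main() {
        fmt.Println(waitForIP("da:3b:71:83:24:7a", 30))
    }
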
	I0823 11:12:15.887997    1632 machine.go:88] provisioning docker machine ...
	I0823 11:12:15.888014    1632 buildroot.go:166] provisioning hostname "addons-603000"
	I0823 11:12:15.888459    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.888726    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.888732    1632 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-603000 && echo "addons-603000" | sudo tee /etc/hostname
	I0823 11:12:15.942083    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-603000
	
	I0823 11:12:15.942154    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.942421    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.942431    1632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-603000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-603000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-603000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:12:15.994730    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
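
	Each `Run:` and `SSH cmd` pair in this log is one command executed over an SSH session keyed by the machine's id_rsa. A minimal sketch of the hostname command above using golang.org/x/crypto/ssh; host-key checking is disabled only because the target is a throwaway test VM:

    // One SSH command against the freshly created VM, as sshutil/ssh_runner
    // do above. Requires: go get golang.org/x/crypto/ssh
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
        }
        client, err := ssh.Dial("tcp", "192.168.105.2:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname addons-603000 && echo "addons-603000" | sudo tee /etc/hostname`)
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
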
	I0823 11:12:15.994741    1632 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:12:15.994754    1632 buildroot.go:174] setting up certificates
	I0823 11:12:15.994761    1632 provision.go:83] configureAuth start
	I0823 11:12:15.994765    1632 provision.go:138] copyHostCerts
	I0823 11:12:15.994890    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:12:15.995099    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
	I0823 11:12:15.995204    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:12:15.995273    1632 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.addons-603000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-603000]
	I0823 11:12:16.087853    1632 provision.go:172] copyRemoteCerts
	I0823 11:12:16.087919    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:12:16.087927    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.117611    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:12:16.124255    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0823 11:12:16.130804    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0823 11:12:16.138250    1632 provision.go:86] duration metric: configureAuth took 143.474625ms
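
	configureAuth above generates a server certificate for the SAN list logged at 11:12:15.995 (IPs 192.168.105.2 and 127.0.0.1 plus localhost, minikube, and addons-603000). A sketch with crypto/x509; self-signed here for brevity, where the real flow signs with the minikube CA key:

    // Generate a TLS server cert carrying the SANs from the log above.
    // Self-signed for brevity; minikube signs with its CA instead.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-603000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "addons-603000"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
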
	I0823 11:12:16.138257    1632 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:12:16.138365    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:16.138406    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.138621    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.138626    1632 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:12:16.188528    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:12:16.188533    1632 buildroot.go:70] root file system type: tmpfs
	I0823 11:12:16.188589    1632 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:12:16.188629    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.188853    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.188886    1632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:12:16.244639    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0823 11:12:16.244690    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.244933    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.244942    1632 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:12:16.607992    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0823 11:12:16.608005    1632 machine.go:91] provisioned docker machine in 720.015167ms
	I0823 11:12:16.608010    1632 client.go:171] LocalClient.Create took 14.696413292s
	I0823 11:12:16.608025    1632 start.go:167] duration metric: libmachine.API.Create for "addons-603000" took 14.696491208s
	I0823 11:12:16.608031    1632 start.go:300] post-start starting for "addons-603000" (driver="qemu2")
	I0823 11:12:16.608036    1632 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:12:16.608110    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:12:16.608120    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.635532    1632 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:12:16.636917    1632 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:12:16.636928    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:12:16.636998    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:12:16.637031    1632 start.go:303] post-start completed in 28.997291ms
	I0823 11:12:16.637396    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:16.637545    1632 start.go:128] duration metric: createHost completed in 15.077498208s
	I0823 11:12:16.637587    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.637809    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.637813    1632 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0823 11:12:16.687915    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692814336.418860543
	
	I0823 11:12:16.687923    1632 fix.go:206] guest clock: 1692814336.418860543
	I0823 11:12:16.687928    1632 fix.go:219] Guest: 2023-08-23 11:12:16.418860543 -0700 PDT Remote: 2023-08-23 11:12:16.637549 -0700 PDT m=+15.183153626 (delta=-218.688457ms)
	I0823 11:12:16.687939    1632 fix.go:190] guest clock delta is within tolerance: -218.688457ms
	I0823 11:12:16.687945    1632 start.go:83] releasing machines lock for "addons-603000", held for 15.127950375s
	I0823 11:12:16.688248    1632 ssh_runner.go:195] Run: cat /version.json
	I0823 11:12:16.688258    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.688283    1632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:12:16.688315    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.830956    1632 ssh_runner.go:195] Run: systemctl --version
	I0823 11:12:16.833845    1632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:12:16.836334    1632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:12:16.836377    1632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 11:12:16.842975    1632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 11:12:16.842984    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.843114    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:16.850298    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0823 11:12:16.854361    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:12:16.858207    1632 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:12:16.858231    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:12:16.861904    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.865421    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:12:16.868360    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.871352    1632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:12:16.874459    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:12:16.878023    1632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:12:16.881145    1632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:12:16.883828    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:16.960568    1632 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 11:12:16.969157    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.969235    1632 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:12:16.974606    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.980054    1632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:12:16.985654    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.990412    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:16.995694    1632 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 11:12:17.032758    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:17.037631    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:17.043070    1632 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:12:17.044366    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:12:17.046802    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:12:17.051842    1632 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:12:17.122781    1632 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:12:17.203234    1632 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:12:17.203250    1632 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0823 11:12:17.208465    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:17.289831    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:18.445051    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15523225s)
	I0823 11:12:18.445131    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.516370    1632 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0823 11:12:18.597904    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.678212    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.761579    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0823 11:12:18.769154    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.854594    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0823 11:12:18.877760    1632 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0823 11:12:18.877836    1632 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0823 11:12:18.879918    1632 start.go:534] Will wait 60s for crictl version
	I0823 11:12:18.879971    1632 ssh_runner.go:195] Run: which crictl
	I0823 11:12:18.881335    1632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0823 11:12:18.895610    1632 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0823 11:12:18.895681    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.905007    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.918762    1632 out.go:204] * Preparing Kubernetes v1.28.0 on Docker 24.0.4 ...
	I0823 11:12:18.918902    1632 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:12:18.920430    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:18.924018    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:18.924057    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:18.929191    1632 docker.go:636] Got preloaded images: 
	I0823 11:12:18.929199    1632 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.0 wasn't preloaded
	I0823 11:12:18.929238    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:18.932021    1632 ssh_runner.go:195] Run: which lz4
	I0823 11:12:18.933304    1632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 11:12:18.934619    1632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0823 11:12:18.934635    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356920680 bytes)
	I0823 11:12:20.248127    1632 docker.go:600] Took 1.314887 seconds to copy over tarball
	I0823 11:12:20.248187    1632 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 11:12:21.298381    1632 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.050206542s)
	I0823 11:12:21.298396    1632 ssh_runner.go:146] rm: /preloaded.tar.lz4
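
	The preload fast path above copies the ~357 MB tarball into the guest, unpacks it with `tar -I lz4 -C /var`, and deletes it. A local sketch of the timed extraction step, assuming tar and lz4 are installed and the process may sudo:

    // Timed lz4 tarball extraction, mirroring the ssh_runner step above.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
        log.Printf("Completed in %s", time.Since(start))
        _ = os.Remove("/preloaded.tar.lz4") // cleanup, as in the rm step above
    }
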
	I0823 11:12:21.314358    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:21.317830    1632 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0823 11:12:21.322906    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:21.401236    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:23.532716    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.131514959s)
	I0823 11:12:23.532818    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:23.539039    1632 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0823 11:12:23.539047    1632 cache_images.go:84] Images are preloaded, skipping loading
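
	The before/after `docker images` listings above are how the preload is verified: if a sentinel image for the target version is present, loading is skipped. A sketch of that check against the Docker CLI:

    // Verify the preload by looking for kube-apiserver:v1.28.0, the image
    // the log above uses as its sentinel.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        want := "registry.k8s.io/kube-apiserver:v1.28.0"
        for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if img == want {
                fmt.Println("Images are preloaded, skipping loading")
                return
            }
        }
        fmt.Printf("%s wasn't preloaded\n", want)
    }
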
	I0823 11:12:23.539094    1632 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:12:23.547073    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:23.547082    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:23.547113    1632 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:12:23.547124    1632 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-603000 NodeName:addons-603000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0823 11:12:23.547195    1632 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-603000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0823 11:12:23.547230    1632 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-603000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0823 11:12:23.547289    1632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0823 11:12:23.550418    1632 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:12:23.550450    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:12:23.553530    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0823 11:12:23.558535    1632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 11:12:23.563462    1632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0823 11:12:23.568191    1632 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0823 11:12:23.569452    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:23.573456    1632 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000 for IP: 192.168.105.2
	I0823 11:12:23.573466    1632 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.573659    1632 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:12:23.735121    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt ...
	I0823 11:12:23.735127    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt: {Name:mk63b4dc9bf64742b3a2443ec5bb13fecbad997f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735357    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key ...
	I0823 11:12:23.735361    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key: {Name:mk69cc2166a61f2546ed0135cfb5fd8f801bd14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735483    1632 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:12:23.796242    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt ...
	I0823 11:12:23.796246    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt: {Name:mk419a1ff00ed72437ca83474e3a88fdfaeeeee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796377    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key ...
	I0823 11:12:23.796380    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key: {Name:mk4c7f9a48478507429af69af483ca06fbc978e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796514    1632 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key
	I0823 11:12:23.796523    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt with IP's: []
	I0823 11:12:24.000999    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt ...
	I0823 11:12:24.001008    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: {Name:mkf571a66a11fe461d7b44eaf6283e5341b1a7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001274    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key ...
	I0823 11:12:24.001277    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key: {Name:mkcf4093aeeeecec47bfe93d95e186f81025a994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001388    1632 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969
	I0823 11:12:24.001398    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0823 11:12:24.112607    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 ...
	I0823 11:12:24.112611    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969: {Name:mkc8690ae4b550cd6392c142a6c0c10c4e6a5520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112766    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 ...
	I0823 11:12:24.112769    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969: {Name:mkce56fce274146ffaf24072ae4ff4b5e0acf555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112886    1632 certs.go:337] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt
	I0823 11:12:24.113091    1632 certs.go:341] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key
	I0823 11:12:24.113203    1632 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key
	I0823 11:12:24.113221    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt with IP's: []
	I0823 11:12:24.164570    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt ...
	I0823 11:12:24.164574    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt: {Name:mk78bcc5a9a59a81499b830c0df5af65d761e828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.164724    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key ...
	I0823 11:12:24.164727    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key: {Name:mk7a6ea56c8d07401b6d3adda9d302f5989259f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.165026    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:12:24.165052    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:12:24.165076    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:12:24.165112    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:12:24.165474    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:12:24.173558    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:12:24.180163    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:12:24.186570    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 11:12:24.193721    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:12:24.200859    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:12:24.207727    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:12:24.214435    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:12:24.221806    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:12:24.228902    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 11:12:24.234839    1632 ssh_runner.go:195] Run: openssl version
	I0823 11:12:24.236947    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:12:24.239793    1632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241347    1632 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241366    1632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.243416    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0823 11:12:24.246590    1632 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:12:24.247919    1632 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0823 11:12:24.247965    1632 kubeadm.go:404] StartCluster: {Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:24.248029    1632 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:12:24.253769    1632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:12:24.257096    1632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:12:24.259889    1632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:12:24.262662    1632 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 11:12:24.262675    1632 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 11:12:24.285841    1632 kubeadm.go:322] [init] Using Kubernetes version: v1.28.0
	I0823 11:12:24.285870    1632 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 11:12:24.340213    1632 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 11:12:24.340268    1632 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 11:12:24.340328    1632 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0823 11:12:24.397608    1632 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 11:12:24.406812    1632 out.go:204]   - Generating certificates and keys ...
	I0823 11:12:24.406874    1632 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 11:12:24.406907    1632 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 11:12:24.448173    1632 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0823 11:12:24.501859    1632 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0823 11:12:24.630928    1632 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0823 11:12:24.705637    1632 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0823 11:12:24.853624    1632 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0823 11:12:24.853703    1632 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:24.984031    1632 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0823 11:12:24.984098    1632 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:25.055121    1632 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0823 11:12:25.098919    1632 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0823 11:12:25.161126    1632 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0823 11:12:25.161156    1632 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 11:12:25.251684    1632 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 11:12:25.312453    1632 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 11:12:25.380616    1632 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 11:12:25.465178    1632 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 11:12:25.465413    1632 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 11:12:25.466498    1632 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 11:12:25.474829    1632 out.go:204]   - Booting up control plane ...
	I0823 11:12:25.474890    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 11:12:25.474941    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 11:12:25.474973    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 11:12:25.475045    1632 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 11:12:25.475095    1632 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 11:12:25.475114    1632 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 11:12:25.557775    1632 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 11:12:29.059855    1632 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.502356 seconds
	I0823 11:12:29.059917    1632 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0823 11:12:29.066243    1632 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0823 11:12:29.575724    1632 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0823 11:12:29.575818    1632 kubeadm.go:322] [mark-control-plane] Marking the node addons-603000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0823 11:12:30.082813    1632 kubeadm.go:322] [bootstrap-token] Using token: l2h0hz.d0iyqenbl2749eth
	I0823 11:12:30.096111    1632 out.go:204]   - Configuring RBAC rules ...
	I0823 11:12:30.096178    1632 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0823 11:12:30.096240    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0823 11:12:30.098015    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0823 11:12:30.099101    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0823 11:12:30.100289    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0823 11:12:30.101319    1632 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0823 11:12:30.105824    1632 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0823 11:12:30.274483    1632 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0823 11:12:30.492880    1632 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0823 11:12:30.493261    1632 kubeadm.go:322] 
	I0823 11:12:30.493290    1632 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0823 11:12:30.493295    1632 kubeadm.go:322] 
	I0823 11:12:30.493350    1632 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0823 11:12:30.493355    1632 kubeadm.go:322] 
	I0823 11:12:30.493371    1632 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0823 11:12:30.493409    1632 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0823 11:12:30.493436    1632 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0823 11:12:30.493440    1632 kubeadm.go:322] 
	I0823 11:12:30.493470    1632 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0823 11:12:30.493473    1632 kubeadm.go:322] 
	I0823 11:12:30.493507    1632 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0823 11:12:30.493512    1632 kubeadm.go:322] 
	I0823 11:12:30.493534    1632 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0823 11:12:30.493571    1632 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0823 11:12:30.493608    1632 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0823 11:12:30.493612    1632 kubeadm.go:322] 
	I0823 11:12:30.493652    1632 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0823 11:12:30.493686    1632 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0823 11:12:30.493689    1632 kubeadm.go:322] 
	I0823 11:12:30.493755    1632 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493808    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f \
	I0823 11:12:30.493820    1632 kubeadm.go:322] 	--control-plane 
	I0823 11:12:30.493824    1632 kubeadm.go:322] 
	I0823 11:12:30.493863    1632 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0823 11:12:30.493867    1632 kubeadm.go:322] 
	I0823 11:12:30.493911    1632 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493962    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f 
	I0823 11:12:30.494020    1632 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 11:12:30.494027    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:30.494035    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:30.502806    1632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0823 11:12:30.507845    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0823 11:12:30.511861    1632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
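	[note] The 457-byte conflist written above is what switches the cluster onto the bridge CNI. The exact bytes are not shown in this log; a minimal sketch of what such a /etc/cni/net.d/1-k8s.conflist typically contains (all field values illustrative):
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }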
	I0823 11:12:30.516952    1632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:12:30.517017    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.517023    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128 minikube.k8s.io/name=addons-603000 minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.588330    1632 ops.go:34] apiserver oom_adj: -16
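	[note] The oom_adj check above reads the kernel OOM-killer bias for the kube-apiserver process; a negative value like -16 makes the OOM killer much less likely to pick it. The same inspection by hand on the node (pgrep -xn selects the newest exact-name match; oom_score_adj is the modern interface behind the legacy oom_adj file):
	  pid=$(sudo pgrep -xn kube-apiserver)
	  sudo cat "/proc/$pid/oom_adj" "/proc/$pid/oom_score_adj"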
	I0823 11:12:30.588368    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.620791    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.158369    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.658375    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.158344    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.658320    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.158345    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.658343    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.158277    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.658250    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.158228    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.658245    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.158202    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.658170    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.156385    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.656540    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.158165    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.658197    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.158131    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.658110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.158144    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.658128    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.158110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.658102    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.158090    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.658123    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.158020    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.214120    1632 kubeadm.go:1081] duration metric: took 12.6974425s to wait for elevateKubeSystemPrivileges.
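	[note] The repeated `kubectl get sa default` calls above are a readiness poll: the controller-manager creates the "default" ServiceAccount asynchronously after init, and minikube retries about every 500ms until it exists before granting kube-system elevated privileges. The loop amounts to:
	  until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done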
	I0823 11:12:43.214131    1632 kubeadm.go:406] StartCluster complete in 18.96661775s
	I0823 11:12:43.214140    1632 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214301    1632 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:43.214543    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214750    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:12:43.214793    1632 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0823 11:12:43.214848    1632 addons.go:69] Setting volumesnapshots=true in profile "addons-603000"
	I0823 11:12:43.214857    1632 addons.go:231] Setting addon volumesnapshots=true in "addons-603000"
	I0823 11:12:43.214861    1632 addons.go:69] Setting ingress=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214901    1632 addons.go:69] Setting default-storageclass=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 addons.go:231] Setting addon ingress=true in "addons-603000"
	I0823 11:12:43.214917    1632 addons.go:69] Setting cloud-spanner=true in profile "addons-603000"
	I0823 11:12:43.214937    1632 addons.go:69] Setting gcp-auth=true in profile "addons-603000"
	I0823 11:12:43.214943    1632 mustload.go:65] Loading cluster: addons-603000
	I0823 11:12:43.214929    1632 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-603000"
	I0823 11:12:43.214951    1632 addons.go:231] Setting addon cloud-spanner=true in "addons-603000"
	I0823 11:12:43.215174    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215232    1632 addons.go:69] Setting metrics-server=true in profile "addons-603000"
	I0823 11:12:43.215240    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215246    1632 addons.go:231] Setting addon metrics-server=true in "addons-603000"
	I0823 11:12:43.215279    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215384    1632 addons.go:69] Setting registry=true in profile "addons-603000"
	I0823 11:12:43.215406    1632 addons.go:231] Setting addon registry=true in "addons-603000"
	I0823 11:12:43.215406    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:43.215469    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215573    1632 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.215613    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215623    1632 addons.go:69] Setting ingress-dns=true in profile "addons-603000"
	I0823 11:12:43.215632    1632 addons.go:69] Setting storage-provisioner=true in profile "addons-603000"
	I0823 11:12:43.215635    1632 addons.go:231] Setting addon ingress-dns=true in "addons-603000"
	I0823 11:12:43.215647    1632 addons.go:231] Setting addon storage-provisioner=true in "addons-603000"
	I0823 11:12:43.215715    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215748    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214927    1632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-603000"
	I0823 11:12:43.215926    1632 addons.go:69] Setting inspektor-gadget=true in profile "addons-603000"
	I0823 11:12:43.215930    1632 addons.go:231] Setting addon inspektor-gadget=true in "addons-603000"
	I0823 11:12:43.215950    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.216258    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	W0823 11:12:43.216285    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216296    1632 addons.go:277] "addons-603000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0823 11:12:43.216301    1632 addons.go:467] Verifying addon ingress=true in "addons-603000"
	I0823 11:12:43.220697    1632 out.go:177] * Verifying ingress addon...
	W0823 11:12:43.216646    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216836    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216925    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217082    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217134    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217426    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217553    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	I0823 11:12:43.218066    1632 host.go:66] Checking if "addons-603000" exists ...
	W0823 11:12:43.229685    1632 addons.go:277] "addons-603000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229693    1632 addons.go:277] "addons-603000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229707    1632 addons_storage_classes.go:55] "addons-603000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0823 11:12:43.229696    1632 addons.go:277] "addons-603000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229727    1632 addons.go:277] "addons-603000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229749    1632 addons.go:277] "addons-603000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0823 11:12:43.230154    1632 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0823 11:12:43.235671    1632 addons.go:277] "addons-603000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0823 11:12:43.235676    1632 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0823 11:12:43.237659    1632 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-603000" context rescaled to 1 replicas
	I0823 11:12:43.239627    1632 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:43.245678    1632 out.go:177] * Verifying Kubernetes components...
	I0823 11:12:43.239670    1632 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0823 11:12:43.239676    1632 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.239682    1632 addons.go:231] Setting addon default-storageclass=true in "addons-603000"
	I0823 11:12:43.239679    1632 addons.go:467] Verifying addon registry=true in "addons-603000"
	I0823 11:12:43.239684    1632 addons.go:467] Verifying addon metrics-server=true in "addons-603000"
	I0823 11:12:43.241522    1632 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0823 11:12:43.257706    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.257792    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:12:43.259109    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
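	[note] Reading the sed program above: it adds a `log` directive after `errors` and inserts a `hosts` block before the `forward` line, so the replaced coredns ConfigMap ends up with a Corefile fragment along these lines (other plugins elided):
	  .:53 {
	      errors
	      log
	      hosts {
	         192.168.105.1 host.minikube.internal
	         fallthrough
	      }
	      ...
	      forward . /etc/resolv.conf
	      ...
	  }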
	I0823 11:12:43.263610    1632 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0823 11:12:43.264359    1632 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.269028    1632 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.275582    1632 out.go:177] * Verifying registry addon...
	I0823 11:12:43.281668    1632 out.go:177] * Verifying csi-hostpath-driver addon...
	I0823 11:12:43.281674    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:12:43.281680    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0823 11:12:43.281692    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0823 11:12:43.288063    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0823 11:12:43.293691    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293700    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293699    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.294283    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0823 11:12:43.297149    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0823 11:12:43.298922    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
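	[note] These kapi waits poll the API server by label selector; the manual equivalents are:
	  kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	  kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	  kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	Zero matching pods here is expected to stay zero: the enablement of these addons was skipped above while the host monitor socket was refusing connections, so their workloads were never applied.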
	I0823 11:12:43.392760    1632 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0823 11:12:43.392773    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0823 11:12:43.396919    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.403660    1632 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0823 11:12:43.403671    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0823 11:12:43.426051    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.442459    1632 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0823 11:12:43.442469    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0823 11:12:43.465118    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0823 11:12:43.465127    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0823 11:12:43.485410    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0823 11:12:43.485421    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0823 11:12:43.505701    1632 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0823 11:12:43.505712    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0823 11:12:43.510666    1632 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.510672    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0823 11:12:43.522915    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.857341    1632 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0823 11:12:43.857696    1632 node_ready.go:35] waiting up to 6m0s for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859423    1632 node_ready.go:49] node "addons-603000" has status "Ready":"True"
	I0823 11:12:43.859431    1632 node_ready.go:38] duration metric: took 1.726334ms waiting for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859435    1632 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:12:43.862557    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:12:45.872773    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:47.873129    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:49.841300    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0823 11:12:49.841316    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.876535    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0823 11:12:49.882169    1632 addons.go:231] Setting addon gcp-auth=true in "addons-603000"
	I0823 11:12:49.882191    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:49.882982    1632 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0823 11:12:49.882989    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.913095    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0823 11:12:49.916035    1632 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0823 11:12:49.920011    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0823 11:12:49.920017    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0823 11:12:49.927312    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0823 11:12:49.927320    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0823 11:12:49.934731    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:49.934739    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0823 11:12:49.943537    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:50.372936    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:50.550032    1632 addons.go:467] Verifying addon gcp-auth=true in "addons-603000"
	I0823 11:12:50.554005    1632 out.go:177] * Verifying gcp-auth addon...
	I0823 11:12:50.561304    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0823 11:12:50.564772    1632 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0823 11:12:50.564779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:50.566533    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.070598    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.570353    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.070168    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.570325    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.873631    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:53.069304    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:53.570391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.068627    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.570250    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.068767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.371099    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:55.570140    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.070737    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.570638    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.070296    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.374140    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:57.570286    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.070358    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.569814    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.070104    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.570841    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.872601    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:00.070509    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:00.570371    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.070409    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.569429    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.069671    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.372554    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:02.571673    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.069684    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.570027    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.070029    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.569893    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.872229    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:05.069764    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:05.569750    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.070699    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.569829    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.872245    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:07.070111    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:07.569767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.069905    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.571644    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.069600    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.372178    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:09.569699    1632 kapi.go:107] duration metric: took 19.008846917s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0823 11:13:09.574491    1632 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-603000 cluster.
	I0823 11:13:09.578500    1632 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0823 11:13:09.582495    1632 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
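	[note] Because gcp-auth is a mutating admission webhook, the skip label has to be present when the pod is created, not added afterwards. A minimal sketch of a pod that opts out (name and image are placeholders; the "true" value follows minikube's documented usage, the message above only requires the key):
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds
	    labels:
	      gcp-auth-skip-secret: "true"
	  spec:
	    containers:
	    - name: app
	      image: busybox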
	I0823 11:13:11.372160    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:13.871903    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:15.872253    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:18.371986    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:20.372421    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:22.871118    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:23.871704    1632 pod_ready.go:92] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.871715    1632 pod_ready.go:81] duration metric: took 40.010097208s waiting for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.871719    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.872661    1632 pod_ready.go:97] error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872669    1632 pod_ready.go:81] duration metric: took 947.666µs waiting for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	E0823 11:13:23.872673    1632 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872689    1632 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875032    1632 pod_ready.go:92] pod "etcd-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.875037    1632 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875040    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877331    1632 pod_ready.go:92] pod "kube-apiserver-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.877336    1632 pod_ready.go:81] duration metric: took 2.293291ms waiting for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877340    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879623    1632 pod_ready.go:92] pod "kube-controller-manager-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.879629    1632 pod_ready.go:81] duration metric: took 2.287416ms waiting for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879633    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072856    1632 pod_ready.go:92] pod "kube-proxy-wxvd5" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.072864    1632 pod_ready.go:81] duration metric: took 193.233541ms waiting for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072869    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.472997    1632 pod_ready.go:92] pod "kube-scheduler-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.473008    1632 pod_ready.go:81] duration metric: took 400.145667ms waiting for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.473012    1632 pod_ready.go:38] duration metric: took 40.614530833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:13:24.473025    1632 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:13:24.473121    1632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:13:24.478605    1632 api_server.go:72] duration metric: took 41.239942375s to wait for apiserver process to appear ...
	I0823 11:13:24.478610    1632 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:13:24.478617    1632 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0823 11:13:24.482269    1632 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0823 11:13:24.482906    1632 api_server.go:141] control plane version: v1.28.0
	I0823 11:13:24.482912    1632 api_server.go:131] duration metric: took 4.29975ms to wait for apiserver health ...
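	[note] The same healthz probe by hand, skipping TLS verification for brevity (or pass the cluster CA with --cacert instead of -k):
	  curl -k https://192.168.105.2:8443/healthz
	  # ok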
	I0823 11:13:24.482914    1632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:13:24.674508    1632 system_pods.go:59] 7 kube-system pods found
	I0823 11:13:24.674519    1632 system_pods.go:61] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:24.674521    1632 system_pods.go:61] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:24.674524    1632 system_pods.go:61] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:24.674526    1632 system_pods.go:61] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:24.674531    1632 system_pods.go:61] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:24.674534    1632 system_pods.go:61] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:24.674539    1632 system_pods.go:61] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:24.674542    1632 system_pods.go:74] duration metric: took 191.630375ms to wait for pod list to return data ...
	I0823 11:13:24.674546    1632 default_sa.go:34] waiting for default service account to be created ...
	I0823 11:13:24.872182    1632 default_sa.go:45] found service account: "default"
	I0823 11:13:24.872197    1632 default_sa.go:55] duration metric: took 197.644583ms for default service account to be created ...
	I0823 11:13:24.872203    1632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0823 11:13:25.073612    1632 system_pods.go:86] 7 kube-system pods found
	I0823 11:13:25.073624    1632 system_pods.go:89] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:25.073627    1632 system_pods.go:89] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:25.073629    1632 system_pods.go:89] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:25.073632    1632 system_pods.go:89] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:25.073636    1632 system_pods.go:89] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:25.073641    1632 system_pods.go:89] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:25.073644    1632 system_pods.go:89] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:25.073647    1632 system_pods.go:126] duration metric: took 201.446458ms to wait for k8s-apps to be running ...
	I0823 11:13:25.073650    1632 system_svc.go:44] waiting for kubelet service to be running ....
	I0823 11:13:25.073715    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:13:25.078691    1632 system_svc.go:56] duration metric: took 5.037209ms WaitForService to wait for kubelet.
	I0823 11:13:25.078704    1632 kubeadm.go:581] duration metric: took 41.84005475s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0823 11:13:25.078718    1632 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:13:25.271033    1632 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:13:25.271069    1632 node_conditions.go:123] node cpu capacity is 2
	I0823 11:13:25.271075    1632 node_conditions.go:105] duration metric: took 192.359208ms to run NodePressure ...
	I0823 11:13:25.271080    1632 start.go:228] waiting for startup goroutines ...
	I0823 11:18:43.244336    1632 kapi.go:107] duration metric: took 6m0.011535667s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0823 11:18:43.244405    1632 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0823 11:18:43.298298    1632 kapi.go:107] duration metric: took 6m0.001378042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0823 11:18:43.298326    1632 kapi.go:107] duration metric: took 6m0.00762275s to wait for kubernetes.io/minikube-addons=registry ...
	W0823 11:18:43.298353    1632 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	W0823 11:18:43.298328    1632 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0823 11:18:43.304474    1632 out.go:177] * Enabled addons: volumesnapshots, cloud-spanner, storage-provisioner, metrics-server, ingress-dns, default-storageclass, inspektor-gadget, gcp-auth
	I0823 11:18:43.311559    1632 addons.go:502] enable addons completed in 6m0.094130167s: enabled=[volumesnapshots cloud-spanner storage-provisioner metrics-server ingress-dns default-storageclass inspektor-gadget gcp-auth]
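	[note] ingress, registry and csi-hostpath-driver timed out only because their manifests were never applied while the host looked down; once the VM is reachable, one way to retry is to re-enable them against this profile:
	  minikube -p addons-603000 addons enable ingress
	  minikube -p addons-603000 addons enable registry
	  minikube -p addons-603000 addons enable csi-hostpath-driver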
	I0823 11:18:43.311576    1632 start.go:233] waiting for cluster config update ...
	I0823 11:18:43.311585    1632 start.go:242] writing updated cluster config ...
	I0823 11:18:43.312012    1632 ssh_runner.go:195] Run: rm -f paused
	I0823 11:18:43.342628    1632 start.go:600] kubectl: 1.27.2, cluster: 1.28.0 (minor skew: 1)
	I0823 11:18:43.345571    1632 out.go:177] * Done! kubectl is now configured to use "addons-603000" cluster and "default" namespace by default
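	[note] The "minor skew: 1" above is within kubectl's support policy: a client may be one minor version older or newer than the server, so kubectl 1.27.2 against a 1.28.0 control plane is supported. Verifiable with:
	  kubectl version -o yaml   # clientVersion.gitVersion v1.27.2, serverVersion.gitVersion v1.28.0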
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:43:06 UTC. --
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216936204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091151665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091185331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091194206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091200122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.134236915Z" level=info msg="shim disconnected" id=c5a1a543d888cc67f431de8db9e4746af52b47a1833c348a10757deec92410c6 namespace=moby
	Aug 23 18:33:50 addons-603000 dockerd[1097]: time="2023-08-23T18:33:50.134449577Z" level=info msg="ignoring event" container=c5a1a543d888cc67f431de8db9e4746af52b47a1833c348a10757deec92410c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.134489534Z" level=warning msg="cleaning up after shim disconnected" id=c5a1a543d888cc67f431de8db9e4746af52b47a1833c348a10757deec92410c6 namespace=moby
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.134494826Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113475782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113502490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113681656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113691239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:39:02 addons-603000 dockerd[1097]: time="2023-08-23T18:39:02.160574422Z" level=info msg="ignoring event" container=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160721255Z" level=info msg="shim disconnected" id=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160747005Z" level=warning msg="cleaning up after shim disconnected" id=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160751297Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1097]: time="2023-08-23T18:43:01.602194723Z" level=info msg="ignoring event" container=db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.602976683Z" level=info msg="shim disconnected" id=db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.603006392Z" level=warning msg="cleaning up after shim disconnected" id=db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.603077642Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.627823549Z" level=info msg="shim disconnected" id=12623df0285522cf53d94a7432712282adb20613e1476bb55ab19ed6cc513390 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.627855299Z" level=warning msg="cleaning up after shim disconnected" id=12623df0285522cf53d94a7432712282adb20613e1476bb55ab19ed6cc513390 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.627860883Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1097]: time="2023-08-23T18:43:01.627995300Z" level=info msg="ignoring event" container=12623df0285522cf53d94a7432712282adb20613e1476bb55ab19ed6cc513390 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID
	77d7b0e8fb48b       1499ed4fbd0aa                                                                                                  4 minutes ago       Exited              minikube-ingress-dns      10                  ee55b672623b1
	c3aac1112826c       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98          12 minutes ago      Running             headlamp                  0                   1ce470d3ac5db
	e852eb8754f9d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   29 minutes ago      Running             gcp-auth                  0                   b3f97bd7715ac
	20ee50fe57df9       940f54a5bcae9                                                                                                  30 minutes ago      Running             kube-proxy                0                   046b3dd276694
	eb22253f397d0       97e04611ad434                                                                                                  30 minutes ago      Running             coredns                   0                   83362738c741a
	695ccc939e29c       762dce4090c5f                                                                                                  30 minutes ago      Running             kube-scheduler            0                   cd7498705ccfc
	0d85c39c58b71       9cdd6470f48c8                                                                                                  30 minutes ago      Running             etcd                      0                   52014f9a89d58
	9700e59743269       46cc66ccc7c19                                                                                                  30 minutes ago      Running             kube-controller-manager   0                   0e19c8d65b488
	594c65a938703       00543d2fe5d71                                                                                                  30 minutes ago      Running             kube-apiserver            0                   e5952ff9dbb6d
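	[note] ATTEMPT 10 with STATE Exited for minikube-ingress-dns means that container is restart-looping; its pod appears earlier in this log as kube-ingress-dns-minikube. The usual next step is to pull its events and the logs of the crashed attempt:
	  kubectl -n kube-system describe pod kube-ingress-dns-minikube
	  kubectl -n kube-system logs kube-ingress-dns-minikube --previous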
	
	* 
	* ==> coredns [eb22253f397d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48978 - 47852 "HINFO IN 1860146809240780526.8201528298968188454. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004478075s
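	[note] The dial timeout to 10.96.0.1:443 is CoreDNS failing to reach the in-cluster "kubernetes" Service (the apiserver's ClusterIP), which also explains the long run of "waiting for Kubernetes API" lines before startup completed. A quick cross-check from outside the pod:
	  kubectl get svc kubernetes               # ClusterIP should be 10.96.0.1
	  kubectl -n kube-system get pods -l k8s-app=kube-dns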
	
	* 
	* ==> describe nodes <==
	* Name:               addons-603000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-603000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=addons-603000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:12:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-603000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:43:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-603000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 d563976ae62b43f8a1fb53a177d29cdd
	  System UUID:                d563976ae62b43f8a1fb53a177d29cdd
	  Boot ID:                    95f14d55-f8c9-49e4-bb2e-505c410326e7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-k5hw7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  headlamp                    headlamp-699c48fb74-fhmj2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-blj5d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-603000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-603000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-603000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-wxvd5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-603000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30m   kube-proxy       
	  Normal  Starting                 30m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m   kubelet          Node addons-603000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m   kubelet          Node addons-603000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m   kubelet          Node addons-603000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m   kubelet          Node addons-603000 status is now: NodeReady
	  Normal  RegisteredNode           30m   node-controller  Node addons-603000 event: Registered Node addons-603000 in Controller
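
For reference, the Allocated resources percentages above follow directly from the node's Capacity: 750m of CPU requested against 2 CPUs (2000m) is 750/2000 = 37.5%, truncated to 37% in the output, and 170Mi of memory requested against 3905012Ki (roughly 3813Mi) is about 4.5%, shown as 4%.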
	
	* 
	* ==> dmesg <==
	* [  +0.645801] EINJ: EINJ table not found.
	[  +0.508913] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043174] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000868] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.060149] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.082417] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.495013] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.161467] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.082809] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.085797] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.141175] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.084612] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.080569] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.082356] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.081344] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.093418] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.547960] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +2.110352] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.040486] systemd-fstab-generator[1407]: Ignoring "noauto" for root device
	[  +4.639082] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[ +13.918576] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.422998] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.996433] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug23 18:13] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.103613] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [0d85c39c58b7] <==
	* {"level":"info","ts":"2023-08-23T18:12:26.555101Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.555137Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.551069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:12:26.555541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-23T18:12:26.563277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:12:26.563307Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-08-23T18:12:48.393724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.007861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-blj5d\" ","response":"range_response_count:1 size:4755"}
	{"level":"info","ts":"2023-08-23T18:12:48.39376Z","caller":"traceutil/trace.go:171","msg":"trace[2046528636] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-blj5d; range_end:; response_count:1; response_revision:423; }","duration":"169.059532ms","start":"2023-08-23T18:12:48.224693Z","end":"2023-08-23T18:12:48.393753Z","steps":["trace[2046528636] 'range keys from in-memory index tree'  (duration: 168.95968ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:12:48.39392Z","caller":"traceutil/trace.go:171","msg":"trace[720409665] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"100.636151ms","start":"2023-08-23T18:12:48.29328Z","end":"2023-08-23T18:12:48.393917Z","steps":["trace[720409665] 'process raft request'  (duration: 77.612181ms)","trace[720409665] 'compare'  (duration: 22.773829ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-23T18:12:50.777156Z","caller":"traceutil/trace.go:171","msg":"trace[471484672] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"104.054592ms","start":"2023-08-23T18:12:50.673089Z","end":"2023-08-23T18:12:50.777143Z","steps":["trace[471484672] 'process raft request'  (duration: 103.971962ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:22:26.650207Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":655}
	{"level":"info","ts":"2023-08-23T18:22:26.650931Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":655,"took":"518.817µs","hash":1262660470}
	{"level":"info","ts":"2023-08-23T18:22:26.650949Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1262660470,"revision":655,"compact-revision":-1}
	{"level":"info","ts":"2023-08-23T18:27:26.652533Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":748}
	{"level":"info","ts":"2023-08-23T18:27:26.652937Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":748,"took":"267.17µs","hash":3279716501}
	{"level":"info","ts":"2023-08-23T18:27:26.652952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279716501,"revision":748,"compact-revision":655}
	{"level":"info","ts":"2023-08-23T18:32:26.655732Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":842}
	{"level":"info","ts":"2023-08-23T18:32:26.656056Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":842,"took":"197.577µs","hash":1969901087}
	{"level":"info","ts":"2023-08-23T18:32:26.656068Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1969901087,"revision":842,"compact-revision":748}
	{"level":"info","ts":"2023-08-23T18:37:26.658244Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2023-08-23T18:37:26.658629Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":973,"took":"234.457µs","hash":4022607794}
	{"level":"info","ts":"2023-08-23T18:37:26.658641Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4022607794,"revision":973,"compact-revision":842}
	{"level":"info","ts":"2023-08-23T18:42:26.66149Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2023-08-23T18:42:26.661796Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1071,"took":"188.042µs","hash":2460628194}
	{"level":"info","ts":"2023-08-23T18:42:26.661807Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2460628194,"revision":1071,"compact-revision":973}
	
	* 
	* ==> gcp-auth [e852eb8754f9] <==
	* 2023/08/23 18:13:08 GCP Auth Webhook started!
	2023/08/23 18:13:18 failed to get releases file: Get "https://storage.googleapis.com/minikube-gcp-auth/releases.json": dial tcp: lookup storage.googleapis.com on 10.96.0.10:53: read udp 10.244.0.5:60048->10.96.0.10:53: read: connection refused
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
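
The one failure above is a DNS lookup against the cluster DNS service (10.96.0.10:53) being refused, consistent with the CoreDNS startup delay earlier in these logs. A small Go sketch that reproduces such a lookup against a specific DNS server rather than the system resolver:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Resolve against the cluster DNS service IP from the log instead
		// of the system resolver, to see the same success or failure mode.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "storage.googleapis.com")
		if err != nil {
			fmt.Println("lookup failed:", err) // e.g. connection refused while CoreDNS is down
			return
		}
		fmt.Println(addrs)
	}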
	
	* 
	* ==> kernel <==
	*  18:43:07 up 30 min,  0 users,  load average: 0.05, 0.12, 0.10
	Linux addons-603000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [594c65a93870] <==
	* I0823 18:12:28.528987       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0823 18:12:28.530764       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0823 18:12:28.531188       1 controller.go:624] quota admission added evaluator for: endpoints
	I0823 18:12:28.532349       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:12:29.394948       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0823 18:12:30.000403       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0823 18:12:30.005168       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0823 18:12:30.011907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0823 18:12:42.297794       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0823 18:12:42.999693       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0823 18:12:43.740564       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:12:50.388002       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.33.188"}
	I0823 18:12:50.406209       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0823 18:17:27.368575       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:22:27.369052       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:27:27.369080       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:30:44.504708       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.202.15"}
	I0823 18:32:27.369968       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:37:27.370986       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:42:27.371798       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:43:01.533841       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:43:01.538319       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0823 18:43:01.547346       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	E0823 18:43:01.547358       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	W0823 18:43:02.544801       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	* 
	* ==> kube-controller-manager [9700e5974326] <==
	* E0823 18:40:42.304147       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:40:42.304173       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:40:57.304649       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:40:57.304835       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:12.305005       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:12.305092       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:27.305078       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:27.305175       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:42.305198       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:42.305221       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:57.305830       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:57.305963       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:42:12.306917       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:42:12.306942       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:42:27.308252       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:42:27.308335       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:42:42.308556       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:42:42.308592       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:42:57.309601       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:42:57.309771       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:43:02.545714       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0823 18:43:03.988873       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:03.988894       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0823 18:43:06.966427       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:06.966446       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
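
The ProvisioningFailed loop above repeats every 15 seconds because the PVC default/hpvc references a StorageClass csi-hostpath-sc that was never created (the CSI test's addon presumably never came up). A minimal client-go sketch of creating a class with that name; the provisioner string is an assumption based on the upstream hostpath CSI driver, not taken from this log:

	package main

	import (
		"context"
		"log"

		storagev1 "k8s.io/api/storage/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Creating a class with the name the PVC asks for would stop the
		// ProvisioningFailed loop, provided a matching provisioner runs.
		sc := &storagev1.StorageClass{
			ObjectMeta:  metav1.ObjectMeta{Name: "csi-hostpath-sc"},
			Provisioner: "hostpath.csi.k8s.io", // assumed provisioner name
		}
		_, err = cs.StorageV1().StorageClasses().Create(context.Background(), sc, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
	}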
	
	* 
	* ==> kube-proxy [20ee50fe57df] <==
	* I0823 18:12:44.324386       1 server_others.go:69] "Using iptables proxy"
	I0823 18:12:44.336042       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0823 18:12:44.344520       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0823 18:12:44.344533       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0823 18:12:44.345190       1 server_others.go:152] "Using iptables Proxier"
	I0823 18:12:44.345242       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0823 18:12:44.345409       1 server.go:846] "Version info" version="v1.28.0"
	I0823 18:12:44.345419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:12:44.346019       1 config.go:188] "Starting service config controller"
	I0823 18:12:44.346074       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0823 18:12:44.346113       1 config.go:97] "Starting endpoint slice config controller"
	I0823 18:12:44.346120       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0823 18:12:44.346526       1 config.go:315] "Starting node config controller"
	I0823 18:12:44.346554       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0823 18:12:44.446872       1 shared_informer.go:318] Caches are synced for node config
	I0823 18:12:44.446887       1 shared_informer.go:318] Caches are synced for service config
	I0823 18:12:44.446897       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [695ccc939e29] <==
	* W0823 18:12:27.662067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0823 18:12:27.662073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0823 18:12:27.662103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0823 18:12:27.662264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0823 18:12:27.662344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0823 18:12:27.662399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0823 18:12:27.662439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0823 18:12:27.662446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0823 18:12:27.662483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0823 18:12:27.662530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0823 18:12:27.662556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0823 18:12:27.662568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0823 18:12:27.662623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0823 18:12:27.662630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0823 18:12:27.662674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:12:27.662681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0823 18:12:28.959674       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
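
The forbidden errors above are the scheduler's informers listing resources before the apiserver finished wiring up RBAC at startup; the final line shows the caches syncing once that settles. A sketch of the underlying authorization check with client-go (kubeconfig path assumed; the scheduler itself uses its in-cluster identity):

	package main

	import (
		"context"
		"fmt"
		"log"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Ask the apiserver whether the current identity may list pods
		// cluster-wide, the permission the forbidden errors are about.
		sar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
			},
		}
		res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("allowed:", res.Status.Allowed)
	}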
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:43:07 UTC. --
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660684    2300 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-debugfs\") pod \"d059753c-0bc7-41e8-8616-dcfd9769bf59\" (UID: \"d059753c-0bc7-41e8-8616-dcfd9769bf59\") "
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660693    2300 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-run\") pod \"d059753c-0bc7-41e8-8616-dcfd9769bf59\" (UID: \"d059753c-0bc7-41e8-8616-dcfd9769bf59\") "
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660700    2300 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-cgroup\") pod \"d059753c-0bc7-41e8-8616-dcfd9769bf59\" (UID: \"d059753c-0bc7-41e8-8616-dcfd9769bf59\") "
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660708    2300 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-bpffs\") pod \"d059753c-0bc7-41e8-8616-dcfd9769bf59\" (UID: \"d059753c-0bc7-41e8-8616-dcfd9769bf59\") "
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660739    2300 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-bpffs" (OuterVolumeSpecName: "bpffs") pod "d059753c-0bc7-41e8-8616-dcfd9769bf59" (UID: "d059753c-0bc7-41e8-8616-dcfd9769bf59"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660753    2300 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-host" (OuterVolumeSpecName: "host") pod "d059753c-0bc7-41e8-8616-dcfd9769bf59" (UID: "d059753c-0bc7-41e8-8616-dcfd9769bf59"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660918    2300 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-debugfs" (OuterVolumeSpecName: "debugfs") pod "d059753c-0bc7-41e8-8616-dcfd9769bf59" (UID: "d059753c-0bc7-41e8-8616-dcfd9769bf59"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660942    2300 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-modules" (OuterVolumeSpecName: "modules") pod "d059753c-0bc7-41e8-8616-dcfd9769bf59" (UID: "d059753c-0bc7-41e8-8616-dcfd9769bf59"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660952    2300 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-run" (OuterVolumeSpecName: "run") pod "d059753c-0bc7-41e8-8616-dcfd9769bf59" (UID: "d059753c-0bc7-41e8-8616-dcfd9769bf59"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.660959    2300 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-cgroup" (OuterVolumeSpecName: "cgroup") pod "d059753c-0bc7-41e8-8616-dcfd9769bf59" (UID: "d059753c-0bc7-41e8-8616-dcfd9769bf59"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.661565    2300 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d059753c-0bc7-41e8-8616-dcfd9769bf59-kube-api-access-74dj4" (OuterVolumeSpecName: "kube-api-access-74dj4") pod "d059753c-0bc7-41e8-8616-dcfd9769bf59" (UID: "d059753c-0bc7-41e8-8616-dcfd9769bf59"). InnerVolumeSpecName "kube-api-access-74dj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760950    2300 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-host\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760968    2300 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-74dj4\" (UniqueName: \"kubernetes.io/projected/d059753c-0bc7-41e8-8616-dcfd9769bf59-kube-api-access-74dj4\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760973    2300 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-modules\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760977    2300 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-debugfs\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760982    2300 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-bpffs\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760986    2300 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-run\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760990    2300 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-cgroup\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.866600    2300 scope.go:117] "RemoveContainer" containerID="db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.875377    2300 scope.go:117] "RemoveContainer" containerID="db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:01 addons-603000 kubelet[2300]: E0823 18:43:01.875796    2300 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23" containerID="db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.875827    2300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"} err="failed to get container status \"db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23\": rpc error: code = Unknown desc = Error response from daemon: No such container: db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:02 addons-603000 kubelet[2300]: I0823 18:43:02.073538    2300 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d059753c-0bc7-41e8-8616-dcfd9769bf59" path="/var/lib/kubelet/pods/d059753c-0bc7-41e8-8616-dcfd9769bf59/volumes"
	Aug 23 18:43:06 addons-603000 kubelet[2300]: I0823 18:43:06.071662    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:43:06 addons-603000 kubelet[2300]: E0823 18:43:06.072085    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
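
The back-off 5m0s in the last message is the kubelet's restart backoff cap: restarts of a crashing container back off exponentially, roughly 10s, 20s, 40s, 80s, 160s, then hold at 300s = 5m0s. The exact base and cap are kubelet defaults and may differ in patched builds.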
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-603000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.73s)

TestAddons/parallel/MetricsServer (720.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:381: failed waiting for metrics-server deployment to stabilize: timed out waiting for the condition
addons_test.go:383: metrics-server stabilized in 6m0.002227916s
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
addons_test.go:385: ***** TestAddons/parallel/MetricsServer: pod "k8s-app=metrics-server" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:385: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
addons_test.go:385: TestAddons/parallel/MetricsServer: showing logs for failed pods as of 2023-08-23 11:44:23.734225 -0700 PDT m=+1981.964275417
addons_test.go:386: failed waiting for k8s-app=metrics-server pod: k8s-app=metrics-server within 6m0s: context deadline exceeded
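
The failure above is the harness's six-minute poll for a Running pod matching k8s-app=metrics-server hitting its deadline. A minimal sketch of that wait pattern with client-go, using the label selector and timeout from the failure (kubeconfig path assumed):

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll every 5s, for up to 6m, for at least one Running metrics-server pod.
		err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
					LabelSelector: "k8s-app=metrics-server",
				})
				if err != nil {
					return false, nil // retry on transient errors
				}
				for _, p := range pods.Items {
					if p.Status.Phase == "Running" {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatal("pod did not start: ", err) // e.g. context deadline exceeded
		}
	}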
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-603000 -n addons-603000
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-603000 logs -n 25
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | --download-only -p             | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT |                     |
	|         | binary-mirror-536000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49317         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-536000        | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | -p addons-603000               | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:18 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT |                     |
	|         | addons-603000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT | 23 Aug 23 11:30 PDT |
	|         | -p addons-603000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:43 PDT | 23 Aug 23 11:43 PDT |
	|         | addons-603000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:12:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:12:01.473993    1632 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:12:01.474116    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474118    1632 out.go:309] Setting ErrFile to fd 2...
	I0823 11:12:01.474121    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474234    1632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:12:01.475282    1632 out.go:303] Setting JSON to false
	I0823 11:12:01.490493    1632 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":694,"bootTime":1692813627,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:12:01.490568    1632 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:12:01.495699    1632 out.go:177] * [addons-603000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:12:01.502675    1632 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:12:01.502715    1632 notify.go:220] Checking for updates...
	I0823 11:12:01.509745    1632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:01.512689    1632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:12:01.515810    1632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:12:01.518747    1632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:12:01.521752    1632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:12:01.524877    1632 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:12:01.528782    1632 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:12:01.535731    1632 start.go:298] selected driver: qemu2
	I0823 11:12:01.535736    1632 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:12:01.535742    1632 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:12:01.537642    1632 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:12:01.540695    1632 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:12:01.543724    1632 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:12:01.543758    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:01.543773    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:01.543776    1632 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:12:01.543783    1632 start_flags.go:319] config:
	{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:01.547827    1632 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:12:01.555769    1632 out.go:177] * Starting control plane node addons-603000 in cluster addons-603000
	I0823 11:12:01.559683    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:01.559703    1632 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:12:01.559720    1632 cache.go:57] Caching tarball of preloaded images
	I0823 11:12:01.559801    1632 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:12:01.559806    1632 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:12:01.560010    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:01.560023    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json: {Name:mkc2981401a387d0fc588c73edb3c2135fa892ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:01.560240    1632 start.go:365] acquiring machines lock for addons-603000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:12:01.560349    1632 start.go:369] acquired machines lock for "addons-603000" in 102.292µs
	I0823 11:12:01.560360    1632 start.go:93] Provisioning new machine with config: &{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:01.560401    1632 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:12:01.568742    1632 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0823 11:12:01.911891    1632 start.go:159] libmachine.API.Create for "addons-603000" (driver="qemu2")
	I0823 11:12:01.911940    1632 client.go:168] LocalClient.Create starting
	I0823 11:12:01.912112    1632 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:12:02.016848    1632 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:12:02.196172    1632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:12:02.677712    1632 main.go:141] libmachine: Creating SSH key...
	I0823 11:12:02.710073    1632 main.go:141] libmachine: Creating Disk image...
	I0823 11:12:02.710078    1632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:12:02.710280    1632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.789119    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.789143    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.789223    1632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2 +20000M
	I0823 11:12:02.796505    1632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:12:02.796516    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.796535    1632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
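
Disk creation is two qemu-img calls, a raw-to-qcow2 convert followed by a resize, exactly as logged above. A minimal Go sketch of the same sequence (the runQemuImg helper and the relative paths are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runQemuImg shells out to qemu-img and surfaces its output on failure.
    func runQemuImg(args ...string) error {
        out, err := exec.Command("qemu-img", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("qemu-img %v: %v\n%s", args, err, out)
        }
        return nil
    }

    func main() {
        raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"
        // Convert the raw seed image into qcow2 format.
        if err := runQemuImg("convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
            panic(err)
        }
        // Grow the image by 20000 MB, matching the "+20000M" resize in the log.
        if err := runQemuImg("resize", qcow2, "+20000M"); err != nil {
            panic(err)
        }
    }
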
	I0823 11:12:02.796546    1632 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:12:02.796585    1632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:3b:71:83:24:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.868731    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.868763    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.868768    1632 main.go:141] libmachine: Attempt 0
	I0823 11:12:02.868785    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:04.870905    1632 main.go:141] libmachine: Attempt 1
	I0823 11:12:04.871005    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:06.873192    1632 main.go:141] libmachine: Attempt 2
	I0823 11:12:06.873220    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:08.875257    1632 main.go:141] libmachine: Attempt 3
	I0823 11:12:08.875269    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:10.877286    1632 main.go:141] libmachine: Attempt 4
	I0823 11:12:10.877310    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:12.879362    1632 main.go:141] libmachine: Attempt 5
	I0823 11:12:12.879382    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881424    1632 main.go:141] libmachine: Attempt 6
	I0823 11:12:14.881454    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881611    1632 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0823 11:12:14.881699    1632 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e79d7d}
	I0823 11:12:14.881706    1632 main.go:141] libmachine: Found match: da:3b:71:83:24:7a
	I0823 11:12:14.881720    1632 main.go:141] libmachine: IP: 192.168.105.2
	I0823 11:12:14.881729    1632 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
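
The IP discovery loop above polls /var/db/dhcpd_leases until the VM's MAC shows up. A small Go sketch of that lookup, assuming the usual macOS lease-file layout (brace-delimited blocks with name=, ip_address=, and hw_address= keys; those key names are an assumption here, minikube's own parser may differ):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPForMAC scans the lease file for a block whose hw_address line
    // contains the given MAC and returns that block's ip_address.
    func findIPForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
                // ip_address precedes hw_address within a block, so ip is set.
                return ip, nil
            }
        }
        return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
        ip, err := findIPForMAC("/var/db/dhcpd_leases", "da:3b:71:83:24:7a")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("IP:", ip)
    }
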
	I0823 11:12:15.887997    1632 machine.go:88] provisioning docker machine ...
	I0823 11:12:15.888014    1632 buildroot.go:166] provisioning hostname "addons-603000"
	I0823 11:12:15.888459    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.888726    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.888732    1632 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-603000 && echo "addons-603000" | sudo tee /etc/hostname
	I0823 11:12:15.942083    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-603000
	
	I0823 11:12:15.942154    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.942421    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.942431    1632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-603000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-603000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-603000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:12:15.994730    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
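
Provisioning runs those commands over plain SSH as the docker user with the machine's generated key. A sketch of one such remote command using golang.org/x/crypto/ssh (requires `go get golang.org/x/crypto/ssh`; the key path is the one logged for this machine, and error handling is kept minimal):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable for a freshly created local VM; not for real hosts.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.168.105.2:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput(`sudo hostname addons-603000 && echo "addons-603000" | sudo tee /etc/hostname`)
        fmt.Printf("err=%v output=%s", err, out)
    }
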
	I0823 11:12:15.994741    1632 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:12:15.994754    1632 buildroot.go:174] setting up certificates
	I0823 11:12:15.994761    1632 provision.go:83] configureAuth start
	I0823 11:12:15.994765    1632 provision.go:138] copyHostCerts
	I0823 11:12:15.994890    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:12:15.995099    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
	I0823 11:12:15.995204    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:12:15.995273    1632 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.addons-603000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-603000]
	I0823 11:12:16.087853    1632 provision.go:172] copyRemoteCerts
	I0823 11:12:16.087919    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:12:16.087927    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.117611    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:12:16.124255    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0823 11:12:16.130804    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0823 11:12:16.138250    1632 provision.go:86] duration metric: configureAuth took 143.474625ms
	I0823 11:12:16.138257    1632 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:12:16.138365    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:16.138406    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.138621    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.138626    1632 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:12:16.188528    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:12:16.188533    1632 buildroot.go:70] root file system type: tmpfs
	I0823 11:12:16.188589    1632 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:12:16.188629    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.188853    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.188886    1632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:12:16.244639    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0823 11:12:16.244690    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.244933    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.244942    1632 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:12:16.607992    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0823 11:12:16.608005    1632 machine.go:91] provisioned docker machine in 720.015167ms
	I0823 11:12:16.608010    1632 client.go:171] LocalClient.Create took 14.696413292s
	I0823 11:12:16.608025    1632 start.go:167] duration metric: libmachine.API.Create for "addons-603000" took 14.696491208s
	I0823 11:12:16.608031    1632 start.go:300] post-start starting for "addons-603000" (driver="qemu2")
	I0823 11:12:16.608036    1632 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:12:16.608110    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:12:16.608120    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.635532    1632 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:12:16.636917    1632 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:12:16.636928    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:12:16.636998    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:12:16.637031    1632 start.go:303] post-start completed in 28.997291ms
	I0823 11:12:16.637396    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:16.637545    1632 start.go:128] duration metric: createHost completed in 15.077498208s
	I0823 11:12:16.637587    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.637809    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.637813    1632 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0823 11:12:16.687915    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692814336.418860543
	
	I0823 11:12:16.687923    1632 fix.go:206] guest clock: 1692814336.418860543
	I0823 11:12:16.687928    1632 fix.go:219] Guest: 2023-08-23 11:12:16.418860543 -0700 PDT Remote: 2023-08-23 11:12:16.637549 -0700 PDT m=+15.183153626 (delta=-218.688457ms)
	I0823 11:12:16.687939    1632 fix.go:190] guest clock delta is within tolerance: -218.688457ms
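
The clock check parses the guest's `date +%s.%N` output and accepts it if the drift from the host clock is small. A sketch of the comparison (the 2s tolerance is an assumed value for illustration; the logged delta of about -219ms passes easily):

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest and host clocks agree to
    // within tol, regardless of sign.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d <= tol
    }

    func main() {
        guest := time.Unix(1692814336, 418860543) // parsed from the guest's date output
        host := time.Now()
        fmt.Println("delta:", guest.Sub(host), "ok:", withinTolerance(guest, host, 2*time.Second))
    }
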
	I0823 11:12:16.687945    1632 start.go:83] releasing machines lock for "addons-603000", held for 15.127950375s
	I0823 11:12:16.688248    1632 ssh_runner.go:195] Run: cat /version.json
	I0823 11:12:16.688258    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.688283    1632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:12:16.688315    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.830956    1632 ssh_runner.go:195] Run: systemctl --version
	I0823 11:12:16.833845    1632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:12:16.836334    1632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:12:16.836377    1632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 11:12:16.842975    1632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 11:12:16.842984    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.843114    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:16.850298    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0823 11:12:16.854361    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:12:16.858207    1632 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:12:16.858231    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:12:16.861904    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.865421    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:12:16.868360    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.871352    1632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:12:16.874459    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:12:16.878023    1632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:12:16.881145    1632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:12:16.883828    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:16.960568    1632 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 11:12:16.969157    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.969235    1632 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:12:16.974606    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.980054    1632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:12:16.985654    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.990412    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:16.995694    1632 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 11:12:17.032758    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:17.037631    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:17.043070    1632 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:12:17.044366    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:12:17.046802    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:12:17.051842    1632 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:12:17.122781    1632 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:12:17.203234    1632 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:12:17.203250    1632 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
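
The 144-byte daemon.json pushed here is what switches dockerd to the cgroupfs driver. The exact payload isn't shown in the log, so the following Go sketch emits a plausible equivalent rather than the real file:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed daemon.json keys for forcing the cgroupfs cgroup driver;
        // minikube's actual file may differ.
        cfg := map[string]interface{}{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }
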
	I0823 11:12:17.208465    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:17.289831    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:18.445051    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15523225s)
	I0823 11:12:18.445131    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.516370    1632 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0823 11:12:18.597904    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.678212    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.761579    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0823 11:12:18.769154    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.854594    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0823 11:12:18.877760    1632 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0823 11:12:18.877836    1632 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0823 11:12:18.879918    1632 start.go:534] Will wait 60s for crictl version
	I0823 11:12:18.879971    1632 ssh_runner.go:195] Run: which crictl
	I0823 11:12:18.881335    1632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0823 11:12:18.895610    1632 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0823 11:12:18.895681    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.905007    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.918762    1632 out.go:204] * Preparing Kubernetes v1.28.0 on Docker 24.0.4 ...
	I0823 11:12:18.918902    1632 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:12:18.920430    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:18.924018    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:18.924057    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:18.929191    1632 docker.go:636] Got preloaded images: 
	I0823 11:12:18.929199    1632 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.0 wasn't preloaded
	I0823 11:12:18.929238    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:18.932021    1632 ssh_runner.go:195] Run: which lz4
	I0823 11:12:18.933304    1632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 11:12:18.934619    1632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0823 11:12:18.934635    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356920680 bytes)
	I0823 11:12:20.248127    1632 docker.go:600] Took 1.314887 seconds to copy over tarball
	I0823 11:12:20.248187    1632 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 11:12:21.298381    1632 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.050206542s)
	I0823 11:12:21.298396    1632 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0823 11:12:21.314358    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:21.317830    1632 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0823 11:12:21.322906    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:21.401236    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:23.532716    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.131514959s)
	I0823 11:12:23.532818    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:23.539039    1632 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0823 11:12:23.539047    1632 cache_images.go:84] Images are preloaded, skipping loading
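
The preload check is just a docker images listing compared against the expected image set. A Go sketch of the same test for the API server image:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask docker for repo:tag pairs, the same query shown in the log.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        want := "registry.k8s.io/kube-apiserver:v1.28.0"
        found := false
        for _, img := range strings.Fields(string(out)) {
            if img == want {
                found = true
                break
            }
        }
        fmt.Println(want, "preloaded:", found)
    }
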
	I0823 11:12:23.539094    1632 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:12:23.547073    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:23.547082    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:23.547113    1632 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:12:23.547124    1632 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-603000 NodeName:addons-603000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0823 11:12:23.547195    1632 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-603000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0823 11:12:23.547230    1632 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-603000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0823 11:12:23.547289    1632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0823 11:12:23.550418    1632 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:12:23.550450    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:12:23.553530    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0823 11:12:23.558535    1632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 11:12:23.563462    1632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0823 11:12:23.568191    1632 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0823 11:12:23.569452    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:23.573456    1632 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000 for IP: 192.168.105.2
	I0823 11:12:23.573466    1632 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.573659    1632 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:12:23.735121    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt ...
	I0823 11:12:23.735127    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt: {Name:mk63b4dc9bf64742b3a2443ec5bb13fecbad997f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735357    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key ...
	I0823 11:12:23.735361    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key: {Name:mk69cc2166a61f2546ed0135cfb5fd8f801bd14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735483    1632 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:12:23.796242    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt ...
	I0823 11:12:23.796246    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt: {Name:mk419a1ff00ed72437ca83474e3a88fdfaeeeee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796377    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key ...
	I0823 11:12:23.796380    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key: {Name:mk4c7f9a48478507429af69af483ca06fbc978e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796514    1632 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key
	I0823 11:12:23.796523    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt with IP's: []
	I0823 11:12:24.000999    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt ...
	I0823 11:12:24.001008    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: {Name:mkf571a66a11fe461d7b44eaf6283e5341b1a7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001274    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key ...
	I0823 11:12:24.001277    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key: {Name:mkcf4093aeeeecec47bfe93d95e186f81025a994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001388    1632 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969
	I0823 11:12:24.001398    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0823 11:12:24.112607    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 ...
	I0823 11:12:24.112611    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969: {Name:mkc8690ae4b550cd6392c142a6c0c10c4e6a5520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112766    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 ...
	I0823 11:12:24.112769    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969: {Name:mkce56fce274146ffaf24072ae4ff4b5e0acf555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112886    1632 certs.go:337] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt
	I0823 11:12:24.113091    1632 certs.go:341] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key
	I0823 11:12:24.113203    1632 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key
	I0823 11:12:24.113221    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt with IP's: []
	I0823 11:12:24.164570    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt ...
	I0823 11:12:24.164574    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt: {Name:mk78bcc5a9a59a81499b830c0df5af65d761e828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.164724    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key ...
	I0823 11:12:24.164727    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key: {Name:mk7a6ea56c8d07401b6d3adda9d302f5989259f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
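
The certs.go/crypto.go steps above mint a CA, then use it to sign the client, apiserver (with the IP SANs [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]), and aggregator certs. A minimal sketch of the self-signed CA half using the standard library (subject, lifetime, and key size are illustrative, not minikube's exact parameters):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Self-signed CA template; leaf certs signed by it would carry the
        // IP SANs listed in the log.
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
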
	I0823 11:12:24.165026    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:12:24.165052    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:12:24.165076    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:12:24.165112    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:12:24.165474    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:12:24.173558    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:12:24.180163    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:12:24.186570    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 11:12:24.193721    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:12:24.200859    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:12:24.207727    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:12:24.214435    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:12:24.221806    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:12:24.228902    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 11:12:24.234839    1632 ssh_runner.go:195] Run: openssl version
	I0823 11:12:24.236947    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:12:24.239793    1632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241347    1632 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241366    1632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.243416    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0823 11:12:24.246590    1632 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:12:24.247919    1632 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0823 11:12:24.247965    1632 kubeadm.go:404] StartCluster: {Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:24.248029    1632 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:12:24.253769    1632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:12:24.257096    1632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:12:24.259889    1632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:12:24.262662    1632 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 11:12:24.262675    1632 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 11:12:24.285841    1632 kubeadm.go:322] [init] Using Kubernetes version: v1.28.0
	I0823 11:12:24.285870    1632 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 11:12:24.340213    1632 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 11:12:24.340268    1632 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 11:12:24.340328    1632 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 11:12:24.397608    1632 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 11:12:24.406812    1632 out.go:204]   - Generating certificates and keys ...
	I0823 11:12:24.406874    1632 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 11:12:24.406907    1632 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 11:12:24.448173    1632 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0823 11:12:24.501859    1632 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0823 11:12:24.630928    1632 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0823 11:12:24.705637    1632 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0823 11:12:24.853624    1632 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0823 11:12:24.853703    1632 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:24.984031    1632 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0823 11:12:24.984098    1632 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:25.055121    1632 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0823 11:12:25.098919    1632 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0823 11:12:25.161126    1632 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0823 11:12:25.161156    1632 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 11:12:25.251684    1632 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 11:12:25.312453    1632 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 11:12:25.380616    1632 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 11:12:25.465178    1632 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 11:12:25.465413    1632 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 11:12:25.466498    1632 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 11:12:25.474829    1632 out.go:204]   - Booting up control plane ...
	I0823 11:12:25.474890    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 11:12:25.474941    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 11:12:25.474973    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 11:12:25.475045    1632 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 11:12:25.475095    1632 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 11:12:25.475114    1632 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 11:12:25.557775    1632 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 11:12:29.059855    1632 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.502356 seconds
	I0823 11:12:29.059917    1632 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0823 11:12:29.066243    1632 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0823 11:12:29.575724    1632 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0823 11:12:29.575818    1632 kubeadm.go:322] [mark-control-plane] Marking the node addons-603000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0823 11:12:30.082813    1632 kubeadm.go:322] [bootstrap-token] Using token: l2h0hz.d0iyqenbl2749eth
	I0823 11:12:30.096111    1632 out.go:204]   - Configuring RBAC rules ...
	I0823 11:12:30.096178    1632 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0823 11:12:30.096240    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0823 11:12:30.098015    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0823 11:12:30.099101    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0823 11:12:30.100289    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0823 11:12:30.101319    1632 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0823 11:12:30.105824    1632 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0823 11:12:30.274483    1632 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0823 11:12:30.492880    1632 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0823 11:12:30.493261    1632 kubeadm.go:322] 
	I0823 11:12:30.493290    1632 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0823 11:12:30.493295    1632 kubeadm.go:322] 
	I0823 11:12:30.493350    1632 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0823 11:12:30.493355    1632 kubeadm.go:322] 
	I0823 11:12:30.493371    1632 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0823 11:12:30.493409    1632 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0823 11:12:30.493436    1632 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0823 11:12:30.493440    1632 kubeadm.go:322] 
	I0823 11:12:30.493470    1632 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0823 11:12:30.493473    1632 kubeadm.go:322] 
	I0823 11:12:30.493507    1632 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0823 11:12:30.493512    1632 kubeadm.go:322] 
	I0823 11:12:30.493534    1632 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0823 11:12:30.493571    1632 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0823 11:12:30.493608    1632 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0823 11:12:30.493612    1632 kubeadm.go:322] 
	I0823 11:12:30.493652    1632 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0823 11:12:30.493686    1632 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0823 11:12:30.493689    1632 kubeadm.go:322] 
	I0823 11:12:30.493755    1632 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493808    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f \
	I0823 11:12:30.493820    1632 kubeadm.go:322] 	--control-plane 
	I0823 11:12:30.493824    1632 kubeadm.go:322] 
	I0823 11:12:30.493863    1632 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0823 11:12:30.493867    1632 kubeadm.go:322] 
	I0823 11:12:30.493911    1632 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493962    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f 
	I0823 11:12:30.494020    1632 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
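
A note on the join commands printed above: bootstrap tokens such as the one shown expire (24 hours by default), so those exact command lines have a limited shelf life. As a hedged sketch, a fresh worker join command can usually be regenerated on the control-plane node:

    # run on the control-plane node; prints a complete "kubeadm join ..." line
    sudo kubeadm token create --print-join-command
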
	I0823 11:12:30.494027    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:30.494035    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:30.502806    1632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0823 11:12:30.507845    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0823 11:12:30.511861    1632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
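
The 457-byte conflist copied above is generated in memory, so its exact contents are not in this log. For orientation, here is a minimal sketch of a bridge CNI configuration of this general shape; every value below, including the name and subnet, is an illustrative assumption rather than the file minikube actually wrote:

    # hypothetical reconstruction of /etc/cni/net.d/1-k8s.conflist
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
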
	I0823 11:12:30.516952    1632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:12:30.517017    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.517023    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128 minikube.k8s.io/name=addons-603000 minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.588330    1632 ops.go:34] apiserver oom_adj: -16
	I0823 11:12:30.588368    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.620791    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.158369    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.658375    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.158344    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.658320    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.158345    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.658343    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.158277    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.658250    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.158228    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.658245    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.158202    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.658170    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.156385    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.656540    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.158165    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.658197    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.158131    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.658110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.158144    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.658128    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.158110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.658102    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.158090    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.658123    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.158020    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.214120    1632 kubeadm.go:1081] duration metric: took 12.6974425s to wait for elevateKubeSystemPrivileges.
	I0823 11:12:43.214131    1632 kubeadm.go:406] StartCluster complete in 18.96661775s
	I0823 11:12:43.214140    1632 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214301    1632 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:43.214543    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214750    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:12:43.214793    1632 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0823 11:12:43.214848    1632 addons.go:69] Setting volumesnapshots=true in profile "addons-603000"
	I0823 11:12:43.214857    1632 addons.go:231] Setting addon volumesnapshots=true in "addons-603000"
	I0823 11:12:43.214861    1632 addons.go:69] Setting ingress=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214901    1632 addons.go:69] Setting default-storageclass=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 addons.go:231] Setting addon ingress=true in "addons-603000"
	I0823 11:12:43.214917    1632 addons.go:69] Setting cloud-spanner=true in profile "addons-603000"
	I0823 11:12:43.214937    1632 addons.go:69] Setting gcp-auth=true in profile "addons-603000"
	I0823 11:12:43.214943    1632 mustload.go:65] Loading cluster: addons-603000
	I0823 11:12:43.214929    1632 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-603000"
	I0823 11:12:43.214951    1632 addons.go:231] Setting addon cloud-spanner=true in "addons-603000"
	I0823 11:12:43.215174    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215232    1632 addons.go:69] Setting metrics-server=true in profile "addons-603000"
	I0823 11:12:43.215240    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215246    1632 addons.go:231] Setting addon metrics-server=true in "addons-603000"
	I0823 11:12:43.215279    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215384    1632 addons.go:69] Setting registry=true in profile "addons-603000"
	I0823 11:12:43.215406    1632 addons.go:231] Setting addon registry=true in "addons-603000"
	I0823 11:12:43.215406    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:43.215469    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215573    1632 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.215613    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215623    1632 addons.go:69] Setting ingress-dns=true in profile "addons-603000"
	I0823 11:12:43.215632    1632 addons.go:69] Setting storage-provisioner=true in profile "addons-603000"
	I0823 11:12:43.215635    1632 addons.go:231] Setting addon ingress-dns=true in "addons-603000"
	I0823 11:12:43.215647    1632 addons.go:231] Setting addon storage-provisioner=true in "addons-603000"
	I0823 11:12:43.215715    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215748    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214927    1632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-603000"
	I0823 11:12:43.215926    1632 addons.go:69] Setting inspektor-gadget=true in profile "addons-603000"
	I0823 11:12:43.215930    1632 addons.go:231] Setting addon inspektor-gadget=true in "addons-603000"
	I0823 11:12:43.215950    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.216258    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	W0823 11:12:43.216285    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216296    1632 addons.go:277] "addons-603000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0823 11:12:43.216301    1632 addons.go:467] Verifying addon ingress=true in "addons-603000"
	I0823 11:12:43.220697    1632 out.go:177] * Verifying ingress addon...
	W0823 11:12:43.216646    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216836    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216925    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217082    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217134    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217426    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217553    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	I0823 11:12:43.218066    1632 host.go:66] Checking if "addons-603000" exists ...
	W0823 11:12:43.229685    1632 addons.go:277] "addons-603000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229693    1632 addons.go:277] "addons-603000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229707    1632 addons_storage_classes.go:55] "addons-603000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0823 11:12:43.229696    1632 addons.go:277] "addons-603000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229727    1632 addons.go:277] "addons-603000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229749    1632 addons.go:277] "addons-603000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0823 11:12:43.230154    1632 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0823 11:12:43.235671    1632 addons.go:277] "addons-603000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0823 11:12:43.235676    1632 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0823 11:12:43.237659    1632 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-603000" context rescaled to 1 replicas
	I0823 11:12:43.239627    1632 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:43.245678    1632 out.go:177] * Verifying Kubernetes components...
	I0823 11:12:43.239670    1632 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0823 11:12:43.239676    1632 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.239682    1632 addons.go:231] Setting addon default-storageclass=true in "addons-603000"
	I0823 11:12:43.239679    1632 addons.go:467] Verifying addon registry=true in "addons-603000"
	I0823 11:12:43.239684    1632 addons.go:467] Verifying addon metrics-server=true in "addons-603000"
	I0823 11:12:43.241522    1632 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0823 11:12:43.257706    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.257792    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:12:43.259109    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
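
The long sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward plugin so that host.minikube.internal resolves to the host's gateway address, and adds a log directive before errors to enable query logging. The injected stanza (taken directly from the sed expression) and a way to inspect the result:

    # the stanza the pipeline injects into the Corefile:
    #     hosts {
    #        192.168.105.1 host.minikube.internal
    #        fallthrough
    #     }
    # inspect the live ConfigMap (assuming kubectl access to this cluster):
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
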
	I0823 11:12:43.263610    1632 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0823 11:12:43.264359    1632 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.269028    1632 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.275582    1632 out.go:177] * Verifying registry addon...
	I0823 11:12:43.281668    1632 out.go:177] * Verifying csi-hostpath-driver addon...
	I0823 11:12:43.281674    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:12:43.281680    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0823 11:12:43.281692    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0823 11:12:43.288063    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0823 11:12:43.293691    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293700    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293699    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.294283    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0823 11:12:43.297149    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0823 11:12:43.298922    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0823 11:12:43.392760    1632 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0823 11:12:43.392773    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0823 11:12:43.396919    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.403660    1632 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0823 11:12:43.403671    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0823 11:12:43.426051    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.442459    1632 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0823 11:12:43.442469    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0823 11:12:43.465118    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0823 11:12:43.465127    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0823 11:12:43.485410    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0823 11:12:43.485421    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0823 11:12:43.505701    1632 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0823 11:12:43.505712    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0823 11:12:43.510666    1632 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.510672    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0823 11:12:43.522915    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.857341    1632 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0823 11:12:43.857696    1632 node_ready.go:35] waiting up to 6m0s for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859423    1632 node_ready.go:49] node "addons-603000" has status "Ready":"True"
	I0823 11:12:43.859431    1632 node_ready.go:38] duration metric: took 1.726334ms waiting for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859435    1632 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:12:43.862557    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:12:45.872773    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:47.873129    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:49.841300    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0823 11:12:49.841316    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.876535    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0823 11:12:49.882169    1632 addons.go:231] Setting addon gcp-auth=true in "addons-603000"
	I0823 11:12:49.882191    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:49.882982    1632 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0823 11:12:49.882989    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.913095    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0823 11:12:49.916035    1632 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0823 11:12:49.920011    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0823 11:12:49.920017    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0823 11:12:49.927312    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0823 11:12:49.927320    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0823 11:12:49.934731    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:49.934739    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0823 11:12:49.943537    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:50.372936    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:50.550032    1632 addons.go:467] Verifying addon gcp-auth=true in "addons-603000"
	I0823 11:12:50.554005    1632 out.go:177] * Verifying gcp-auth addon...
	I0823 11:12:50.561304    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0823 11:12:50.564772    1632 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0823 11:12:50.564779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:50.566533    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.070598    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.570353    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.070168    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.570325    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.873631    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:53.069304    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:53.570391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.068627    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.570250    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.068767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.371099    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:55.570140    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.070737    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.570638    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.070296    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.374140    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:57.570286    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.070358    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.569814    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.070104    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.570841    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.872601    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:00.070509    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:00.570371    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.070409    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.569429    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.069671    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.372554    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:02.571673    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.069684    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.570027    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.070029    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.569893    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.872229    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:05.069764    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:05.569750    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.070699    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.569829    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.872245    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:07.070111    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:07.569767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.069905    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.571644    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.069600    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.372178    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:09.569699    1632 kapi.go:107] duration metric: took 19.008846917s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0823 11:13:09.574491    1632 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-603000 cluster.
	I0823 11:13:09.578500    1632 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0823 11:13:09.582495    1632 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
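
The three gcp-auth notes above describe the webhook's opt-out mechanism. As a hedged sketch of how that looks in practice (the pod name and image are placeholders, and the label value is illustrative since the message only guarantees the webhook keys off the gcp-auth-skip-secret label), a pod created with the label would not get credentials mounted:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds            # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
    EOF
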
	I0823 11:13:11.372160    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:13.871903    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:15.872253    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:18.371986    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:20.372421    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:22.871118    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:23.871704    1632 pod_ready.go:92] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.871715    1632 pod_ready.go:81] duration metric: took 40.010097208s waiting for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.871719    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.872661    1632 pod_ready.go:97] error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872669    1632 pod_ready.go:81] duration metric: took 947.666µs waiting for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	E0823 11:13:23.872673    1632 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872689    1632 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875032    1632 pod_ready.go:92] pod "etcd-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.875037    1632 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875040    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877331    1632 pod_ready.go:92] pod "kube-apiserver-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.877336    1632 pod_ready.go:81] duration metric: took 2.293291ms waiting for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877340    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879623    1632 pod_ready.go:92] pod "kube-controller-manager-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.879629    1632 pod_ready.go:81] duration metric: took 2.287416ms waiting for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879633    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072856    1632 pod_ready.go:92] pod "kube-proxy-wxvd5" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.072864    1632 pod_ready.go:81] duration metric: took 193.233541ms waiting for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072869    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.472997    1632 pod_ready.go:92] pod "kube-scheduler-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.473008    1632 pod_ready.go:81] duration metric: took 400.145667ms waiting for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.473012    1632 pod_ready.go:38] duration metric: took 40.614530833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:13:24.473025    1632 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:13:24.473121    1632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:13:24.478605    1632 api_server.go:72] duration metric: took 41.239942375s to wait for apiserver process to appear ...
	I0823 11:13:24.478610    1632 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:13:24.478617    1632 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0823 11:13:24.482269    1632 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0823 11:13:24.482906    1632 api_server.go:141] control plane version: v1.28.0
	I0823 11:13:24.482912    1632 api_server.go:131] duration metric: took 4.29975ms to wait for apiserver health ...
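
The healthz probe above can be reproduced by hand against the same endpoint; a sketch (the -k flag skips TLS verification, since the apiserver presents a cluster-internal certificate):

    curl -k https://192.168.105.2:8443/healthz
    # the log above shows this returning HTTP 200 with body: ok
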
	I0823 11:13:24.482914    1632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:13:24.674508    1632 system_pods.go:59] 7 kube-system pods found
	I0823 11:13:24.674519    1632 system_pods.go:61] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:24.674521    1632 system_pods.go:61] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:24.674524    1632 system_pods.go:61] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:24.674526    1632 system_pods.go:61] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:24.674531    1632 system_pods.go:61] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:24.674534    1632 system_pods.go:61] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:24.674539    1632 system_pods.go:61] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:24.674542    1632 system_pods.go:74] duration metric: took 191.630375ms to wait for pod list to return data ...
	I0823 11:13:24.674546    1632 default_sa.go:34] waiting for default service account to be created ...
	I0823 11:13:24.872182    1632 default_sa.go:45] found service account: "default"
	I0823 11:13:24.872197    1632 default_sa.go:55] duration metric: took 197.644583ms for default service account to be created ...
	I0823 11:13:24.872203    1632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0823 11:13:25.073612    1632 system_pods.go:86] 7 kube-system pods found
	I0823 11:13:25.073624    1632 system_pods.go:89] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:25.073627    1632 system_pods.go:89] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:25.073629    1632 system_pods.go:89] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:25.073632    1632 system_pods.go:89] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:25.073636    1632 system_pods.go:89] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:25.073641    1632 system_pods.go:89] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:25.073644    1632 system_pods.go:89] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:25.073647    1632 system_pods.go:126] duration metric: took 201.446458ms to wait for k8s-apps to be running ...
	I0823 11:13:25.073650    1632 system_svc.go:44] waiting for kubelet service to be running ....
	I0823 11:13:25.073715    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:13:25.078691    1632 system_svc.go:56] duration metric: took 5.037209ms WaitForService to wait for kubelet.
	I0823 11:13:25.078704    1632 kubeadm.go:581] duration metric: took 41.84005475s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0823 11:13:25.078718    1632 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:13:25.271033    1632 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:13:25.271069    1632 node_conditions.go:123] node cpu capacity is 2
	I0823 11:13:25.271075    1632 node_conditions.go:105] duration metric: took 192.359208ms to run NodePressure ...
	I0823 11:13:25.271080    1632 start.go:228] waiting for startup goroutines ...
	I0823 11:18:43.244336    1632 kapi.go:107] duration metric: took 6m0.011535667s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0823 11:18:43.244405    1632 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0823 11:18:43.298298    1632 kapi.go:107] duration metric: took 6m0.001378042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0823 11:18:43.298326    1632 kapi.go:107] duration metric: took 6m0.00762275s to wait for kubernetes.io/minikube-addons=registry ...
	W0823 11:18:43.298353    1632 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	W0823 11:18:43.298328    1632 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0823 11:18:43.304474    1632 out.go:177] * Enabled addons: volumesnapshots, cloud-spanner, storage-provisioner, metrics-server, ingress-dns, default-storageclass, inspektor-gadget, gcp-auth
	I0823 11:18:43.311559    1632 addons.go:502] enable addons completed in 6m0.094130167s: enabled=[volumesnapshots cloud-spanner storage-provisioner metrics-server ingress-dns default-storageclass inspektor-gadget gcp-auth]
	I0823 11:18:43.311576    1632 start.go:233] waiting for cluster config update ...
	I0823 11:18:43.311585    1632 start.go:242] writing updated cluster config ...
	I0823 11:18:43.312012    1632 ssh_runner.go:195] Run: rm -f paused
	I0823 11:18:43.342628    1632 start.go:600] kubectl: 1.27.2, cluster: 1.28.0 (minor skew: 1)
	I0823 11:18:43.345571    1632 out.go:177] * Done! kubectl is now configured to use "addons-603000" cluster and "default" namespace by default
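
Since the start log ends with kubectl pointed at the new profile, a quick post-start sanity check looks like this (a sketch; the context name typically matches the profile, here "addons-603000"):

    kubectl config current-context    # expected: addons-603000
    kubectl get nodes                 # the single control-plane node should be Ready
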
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:44:24 UTC. --
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.134494826Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113475782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113502490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113681656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113691239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:39:02 addons-603000 dockerd[1097]: time="2023-08-23T18:39:02.160574422Z" level=info msg="ignoring event" container=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160721255Z" level=info msg="shim disconnected" id=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160747005Z" level=warning msg="cleaning up after shim disconnected" id=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160751297Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1097]: time="2023-08-23T18:43:01.602194723Z" level=info msg="ignoring event" container=db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.602976683Z" level=info msg="shim disconnected" id=db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.603006392Z" level=warning msg="cleaning up after shim disconnected" id=db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.603077642Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.627823549Z" level=info msg="shim disconnected" id=12623df0285522cf53d94a7432712282adb20613e1476bb55ab19ed6cc513390 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.627855299Z" level=warning msg="cleaning up after shim disconnected" id=12623df0285522cf53d94a7432712282adb20613e1476bb55ab19ed6cc513390 namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1103]: time="2023-08-23T18:43:01.627860883Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:43:01 addons-603000 dockerd[1097]: time="2023-08-23T18:43:01.627995300Z" level=info msg="ignoring event" container=12623df0285522cf53d94a7432712282adb20613e1476bb55ab19ed6cc513390 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:44:09 addons-603000 dockerd[1103]: time="2023-08-23T18:44:09.101032923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:44:09 addons-603000 dockerd[1103]: time="2023-08-23T18:44:09.101077173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:44:09 addons-603000 dockerd[1103]: time="2023-08-23T18:44:09.101088048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:44:09 addons-603000 dockerd[1103]: time="2023-08-23T18:44:09.101094423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:44:09 addons-603000 dockerd[1097]: time="2023-08-23T18:44:09.144961673Z" level=info msg="ignoring event" container=24a9bcc8ece4c7c9f79a915b99d1d85ec3ddb606c2342f5aeb424161d7ac4764 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:44:09 addons-603000 dockerd[1103]: time="2023-08-23T18:44:09.145280591Z" level=info msg="shim disconnected" id=24a9bcc8ece4c7c9f79a915b99d1d85ec3ddb606c2342f5aeb424161d7ac4764 namespace=moby
	Aug 23 18:44:09 addons-603000 dockerd[1103]: time="2023-08-23T18:44:09.145336383Z" level=warning msg="cleaning up after shim disconnected" id=24a9bcc8ece4c7c9f79a915b99d1d85ec3ddb606c2342f5aeb424161d7ac4764 namespace=moby
	Aug 23 18:44:09 addons-603000 dockerd[1103]: time="2023-08-23T18:44:09.145355674Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID
	24a9bcc8ece4c       1499ed4fbd0aa                                                                                                  15 seconds ago      Exited              minikube-ingress-dns      11                  ee55b672623b1
	c3aac1112826c       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98          13 minutes ago      Running             headlamp                  0                   1ce470d3ac5db
	e852eb8754f9d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   31 minutes ago      Running             gcp-auth                  0                   b3f97bd7715ac
	20ee50fe57df9       940f54a5bcae9                                                                                                  31 minutes ago      Running             kube-proxy                0                   046b3dd276694
	eb22253f397d0       97e04611ad434                                                                                                  31 minutes ago      Running             coredns                   0                   83362738c741a
	695ccc939e29c       762dce4090c5f                                                                                                  31 minutes ago      Running             kube-scheduler            0                   cd7498705ccfc
	0d85c39c58b71       9cdd6470f48c8                                                                                                  31 minutes ago      Running             etcd                      0                   52014f9a89d58
	9700e59743269       46cc66ccc7c19                                                                                                  31 minutes ago      Running             kube-controller-manager   0                   0e19c8d65b488
	594c65a938703       00543d2fe5d71                                                                                                  31 minutes ago      Running             kube-apiserver            0                   e5952ff9dbb6d
	
	* 
	* ==> coredns [eb22253f397d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48978 - 47852 "HINFO IN 1860146809240780526.8201528298968188454. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004478075s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-603000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-603000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=addons-603000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:12:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-603000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:44:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-603000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 d563976ae62b43f8a1fb53a177d29cdd
	  System UUID:                d563976ae62b43f8a1fb53a177d29cdd
	  Boot ID:                    95f14d55-f8c9-49e4-bb2e-505c410326e7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-k5hw7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  headlamp                    headlamp-699c48fb74-fhmj2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-blj5d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     31m
	  kube-system                 etcd-addons-603000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         31m
	  kube-system                 kube-apiserver-addons-603000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-controller-manager-addons-603000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-proxy-wxvd5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-scheduler-addons-603000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 31m   kube-proxy       
	  Normal  Starting                 31m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31m   kubelet          Node addons-603000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31m   kubelet          Node addons-603000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31m   kubelet          Node addons-603000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                31m   kubelet          Node addons-603000 status is now: NodeReady
	  Normal  RegisteredNode           31m   node-controller  Node addons-603000 event: Registered Node addons-603000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.645801] EINJ: EINJ table not found.
	[  +0.508913] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043174] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000868] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.060149] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.082417] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.495013] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.161467] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.082809] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.085797] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.141175] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.084612] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.080569] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.082356] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.081344] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.093418] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.547960] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +2.110352] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.040486] systemd-fstab-generator[1407]: Ignoring "noauto" for root device
	[  +4.639082] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[ +13.918576] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.422998] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.996433] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug23 18:13] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.103613] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [0d85c39c58b7] <==
	* {"level":"info","ts":"2023-08-23T18:12:26.555101Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.555137Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.551069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:12:26.555541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-23T18:12:26.563277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:12:26.563307Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-08-23T18:12:48.393724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.007861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-blj5d\" ","response":"range_response_count:1 size:4755"}
	{"level":"info","ts":"2023-08-23T18:12:48.39376Z","caller":"traceutil/trace.go:171","msg":"trace[2046528636] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-blj5d; range_end:; response_count:1; response_revision:423; }","duration":"169.059532ms","start":"2023-08-23T18:12:48.224693Z","end":"2023-08-23T18:12:48.393753Z","steps":["trace[2046528636] 'range keys from in-memory index tree'  (duration: 168.95968ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:12:48.39392Z","caller":"traceutil/trace.go:171","msg":"trace[720409665] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"100.636151ms","start":"2023-08-23T18:12:48.29328Z","end":"2023-08-23T18:12:48.393917Z","steps":["trace[720409665] 'process raft request'  (duration: 77.612181ms)","trace[720409665] 'compare'  (duration: 22.773829ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-23T18:12:50.777156Z","caller":"traceutil/trace.go:171","msg":"trace[471484672] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"104.054592ms","start":"2023-08-23T18:12:50.673089Z","end":"2023-08-23T18:12:50.777143Z","steps":["trace[471484672] 'process raft request'  (duration: 103.971962ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:22:26.650207Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":655}
	{"level":"info","ts":"2023-08-23T18:22:26.650931Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":655,"took":"518.817µs","hash":1262660470}
	{"level":"info","ts":"2023-08-23T18:22:26.650949Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1262660470,"revision":655,"compact-revision":-1}
	{"level":"info","ts":"2023-08-23T18:27:26.652533Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":748}
	{"level":"info","ts":"2023-08-23T18:27:26.652937Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":748,"took":"267.17µs","hash":3279716501}
	{"level":"info","ts":"2023-08-23T18:27:26.652952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279716501,"revision":748,"compact-revision":655}
	{"level":"info","ts":"2023-08-23T18:32:26.655732Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":842}
	{"level":"info","ts":"2023-08-23T18:32:26.656056Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":842,"took":"197.577µs","hash":1969901087}
	{"level":"info","ts":"2023-08-23T18:32:26.656068Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1969901087,"revision":842,"compact-revision":748}
	{"level":"info","ts":"2023-08-23T18:37:26.658244Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2023-08-23T18:37:26.658629Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":973,"took":"234.457µs","hash":4022607794}
	{"level":"info","ts":"2023-08-23T18:37:26.658641Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4022607794,"revision":973,"compact-revision":842}
	{"level":"info","ts":"2023-08-23T18:42:26.66149Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2023-08-23T18:42:26.661796Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1071,"took":"188.042µs","hash":2460628194}
	{"level":"info","ts":"2023-08-23T18:42:26.661807Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2460628194,"revision":1071,"compact-revision":973}
	
	* 
	* ==> gcp-auth [e852eb8754f9] <==
	* 2023/08/23 18:13:08 GCP Auth Webhook started!
	2023/08/23 18:13:18 failed to get releases file: Get "https://storage.googleapis.com/minikube-gcp-auth/releases.json": dial tcp: lookup storage.googleapis.com on 10.96.0.10:53: read udp 10.244.0.5:60048->10.96.0.10:53: read: connection refused
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
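	The failed releases-file fetch above is a DNS failure: the in-cluster resolver at 10.96.0.10:53 refused the lookup shortly after the webhook started. A hypothetical way to reproduce the same lookup from inside the cluster (the pod name and busybox image are assumptions, not part of the test run):

	kubectl --context addons-603000 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup storage.googleapis.com 10.96.0.10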
	
	* 
	* ==> kernel <==
	*  18:44:24 up 32 min,  0 users,  load average: 0.03, 0.10, 0.09
	Linux addons-603000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [594c65a93870] <==
	* I0823 18:12:28.528987       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0823 18:12:28.530764       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0823 18:12:28.531188       1 controller.go:624] quota admission added evaluator for: endpoints
	I0823 18:12:28.532349       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:12:29.394948       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0823 18:12:30.000403       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0823 18:12:30.005168       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0823 18:12:30.011907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0823 18:12:42.297794       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0823 18:12:42.999693       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0823 18:12:43.740564       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:12:50.388002       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.33.188"}
	I0823 18:12:50.406209       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0823 18:17:27.368575       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:22:27.369052       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:27:27.369080       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:30:44.504708       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.202.15"}
	I0823 18:32:27.369968       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:37:27.370986       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:42:27.371798       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:43:01.533841       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:43:01.538319       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0823 18:43:01.547346       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	E0823 18:43:01.547358       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	W0823 18:43:02.544801       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	* 
	* ==> kube-controller-manager [9700e5974326] <==
	* W0823 18:43:03.988873       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:03.988894       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0823 18:43:06.966427       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:06.966446       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0823 18:43:11.562207       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	E0823 18:43:12.309899       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:43:12.309988       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	I0823 18:43:12.714713       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0823 18:43:12.714733       1 shared_informer.go:318] Caches are synced for resource quota
	W0823 18:43:12.909999       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:12.910020       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0823 18:43:12.932226       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0823 18:43:12.932240       1 shared_informer.go:318] Caches are synced for garbage collector
	W0823 18:43:23.726214       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:23.726243       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:27.310392       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:43:27.310496       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:43:42.310854       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:43:42.310904       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	W0823 18:43:48.809103       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:48.809132       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0823 18:43:57.311663       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:43:57.311728       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:44:12.311904       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:44:12.311972       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
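	Every ProvisioningFailed event above traces back to the csi-hostpath-sc StorageClass never being registered. A hypothetical follow-up check (not part of the test run) to list the classes the cluster actually has:

	kubectl --context addons-603000 get storageclass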
	
	* 
	* ==> kube-proxy [20ee50fe57df] <==
	* I0823 18:12:44.324386       1 server_others.go:69] "Using iptables proxy"
	I0823 18:12:44.336042       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0823 18:12:44.344520       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0823 18:12:44.344533       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0823 18:12:44.345190       1 server_others.go:152] "Using iptables Proxier"
	I0823 18:12:44.345242       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0823 18:12:44.345409       1 server.go:846] "Version info" version="v1.28.0"
	I0823 18:12:44.345419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:12:44.346019       1 config.go:188] "Starting service config controller"
	I0823 18:12:44.346074       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0823 18:12:44.346113       1 config.go:97] "Starting endpoint slice config controller"
	I0823 18:12:44.346120       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0823 18:12:44.346526       1 config.go:315] "Starting node config controller"
	I0823 18:12:44.346554       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0823 18:12:44.446872       1 shared_informer.go:318] Caches are synced for node config
	I0823 18:12:44.446887       1 shared_informer.go:318] Caches are synced for service config
	I0823 18:12:44.446897       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [695ccc939e29] <==
	* W0823 18:12:27.662067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0823 18:12:27.662073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0823 18:12:27.662103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0823 18:12:27.662264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0823 18:12:27.662344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0823 18:12:27.662399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0823 18:12:27.662439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0823 18:12:27.662446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0823 18:12:27.662483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0823 18:12:27.662530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0823 18:12:27.662556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0823 18:12:27.662568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0823 18:12:27.662623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0823 18:12:27.662630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0823 18:12:27.662674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:12:27.662681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0823 18:12:28.959674       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:44:24 UTC. --
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760986    2300 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-run\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.760990    2300 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/d059753c-0bc7-41e8-8616-dcfd9769bf59-cgroup\") on node \"addons-603000\" DevicePath \"\""
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.866600    2300 scope.go:117] "RemoveContainer" containerID="db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.875377    2300 scope.go:117] "RemoveContainer" containerID="db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:01 addons-603000 kubelet[2300]: E0823 18:43:01.875796    2300 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23" containerID="db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:01 addons-603000 kubelet[2300]: I0823 18:43:01.875827    2300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"} err="failed to get container status \"db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23\": rpc error: code = Unknown desc = Error response from daemon: No such container: db988e4e18bfe19fd571452804b97844bdd36c708268025a5102d6ce87ecea23"
	Aug 23 18:43:02 addons-603000 kubelet[2300]: I0823 18:43:02.073538    2300 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d059753c-0bc7-41e8-8616-dcfd9769bf59" path="/var/lib/kubelet/pods/d059753c-0bc7-41e8-8616-dcfd9769bf59/volumes"
	Aug 23 18:43:06 addons-603000 kubelet[2300]: I0823 18:43:06.071662    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:43:06 addons-603000 kubelet[2300]: E0823 18:43:06.072085    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:43:20 addons-603000 kubelet[2300]: I0823 18:43:20.073570    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:43:20 addons-603000 kubelet[2300]: E0823 18:43:20.074003    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:43:30 addons-603000 kubelet[2300]: E0823 18:43:30.076286    2300 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 23 18:43:30 addons-603000 kubelet[2300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 23 18:43:30 addons-603000 kubelet[2300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 23 18:43:30 addons-603000 kubelet[2300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 23 18:43:31 addons-603000 kubelet[2300]: I0823 18:43:31.071131    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:43:31 addons-603000 kubelet[2300]: E0823 18:43:31.071297    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:43:45 addons-603000 kubelet[2300]: I0823 18:43:45.071690    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:43:45 addons-603000 kubelet[2300]: E0823 18:43:45.071806    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:43:58 addons-603000 kubelet[2300]: I0823 18:43:58.071103    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:43:58 addons-603000 kubelet[2300]: E0823 18:43:58.071207    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:44:09 addons-603000 kubelet[2300]: I0823 18:44:09.071453    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:44:10 addons-603000 kubelet[2300]: I0823 18:44:10.148140    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:44:10 addons-603000 kubelet[2300]: I0823 18:44:10.148462    2300 scope.go:117] "RemoveContainer" containerID="24a9bcc8ece4c7c9f79a915b99d1d85ec3ddb606c2342f5aeb424161d7ac4764"
	Aug 23 18:44:10 addons-603000 kubelet[2300]: E0823 18:44:10.148594    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-603000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (720.78s)

TestAddons/parallel/CSI (720.84s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:535: failed waiting for csi-hostpath-driver pods to stabilize: context deadline exceeded
addons_test.go:537: csi-hostpath-driver pods stabilized in 6m0.002130916s
addons_test.go:540: (dbg) Run:  kubectl --context addons-603000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
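For reference, a minimal sketch of the kind of claim testdata/csi-hostpath-driver/pvc.yaml creates, applied the way the test applies it. The claim name (hpvc) and storage class (csi-hostpath-sc) are taken from the kube-controller-manager errors above; the access mode and requested size here are assumptions:

kubectl --context addons-603000 create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
  namespace: default
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

Because csi-hostpath-sc is never registered (see the ProvisioningFailed events above), such a claim stays Pending, so the 6m0s wait below can only time out.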
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-603000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:546: failed waiting for PVC hpvc: context deadline exceeded
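helpers_test polls that jsonpath query on an interval until the test's context deadline expires; here the phase never reached the expected value. A standalone reproduction of the wait, assuming the same profile context and claim name (the 720s timeout and the use of "kubectl wait" are assumptions, not the helper's actual mechanism):

    # Block until the hpvc claim reports Bound, or fail after an assumed 720s deadline
    kubectl --context addons-603000 -n default wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=720s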
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-603000 -n addons-603000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-603000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | --download-only -p             | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT |                     |
	|         | binary-mirror-536000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49317         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-536000        | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | -p addons-603000               | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:18 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT |                     |
	|         | addons-603000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT | 23 Aug 23 11:30 PDT |
	|         | -p addons-603000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
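Reassembled from the wrapped Args column, the final start row above corresponds to a single invocation along these lines (a sketch built from the table, flag order preserved; the binary path is the one used by the post-mortem commands):

    out/minikube-darwin-arm64 start -p addons-603000 --wait=true --memory=4000 \
      --alsologtostderr --addons=registry --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2 \
      --addons=ingress --addons=ingress-dns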
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:12:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:12:01.473993    1632 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:12:01.474116    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474118    1632 out.go:309] Setting ErrFile to fd 2...
	I0823 11:12:01.474121    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474234    1632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:12:01.475282    1632 out.go:303] Setting JSON to false
	I0823 11:12:01.490493    1632 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":694,"bootTime":1692813627,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:12:01.490568    1632 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:12:01.495699    1632 out.go:177] * [addons-603000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:12:01.502675    1632 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:12:01.502715    1632 notify.go:220] Checking for updates...
	I0823 11:12:01.509745    1632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:01.512689    1632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:12:01.515810    1632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:12:01.518747    1632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:12:01.521752    1632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:12:01.524877    1632 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:12:01.528782    1632 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:12:01.535731    1632 start.go:298] selected driver: qemu2
	I0823 11:12:01.535736    1632 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:12:01.535742    1632 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:12:01.537642    1632 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:12:01.540695    1632 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:12:01.543724    1632 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:12:01.543758    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:01.543773    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:01.543776    1632 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:12:01.543783    1632 start_flags.go:319] config:
	{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:01.547827    1632 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:12:01.555769    1632 out.go:177] * Starting control plane node addons-603000 in cluster addons-603000
	I0823 11:12:01.559683    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:01.559703    1632 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:12:01.559720    1632 cache.go:57] Caching tarball of preloaded images
	I0823 11:12:01.559801    1632 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:12:01.559806    1632 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:12:01.560010    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:01.560023    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json: {Name:mkc2981401a387d0fc588c73edb3c2135fa892ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:01.560240    1632 start.go:365] acquiring machines lock for addons-603000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:12:01.560349    1632 start.go:369] acquired machines lock for "addons-603000" in 102.292µs
	I0823 11:12:01.560360    1632 start.go:93] Provisioning new machine with config: &{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:01.560401    1632 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:12:01.568742    1632 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0823 11:12:01.911891    1632 start.go:159] libmachine.API.Create for "addons-603000" (driver="qemu2")
	I0823 11:12:01.911940    1632 client.go:168] LocalClient.Create starting
	I0823 11:12:01.912112    1632 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:12:02.016848    1632 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:12:02.196172    1632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:12:02.677712    1632 main.go:141] libmachine: Creating SSH key...
	I0823 11:12:02.710073    1632 main.go:141] libmachine: Creating Disk image...
	I0823 11:12:02.710078    1632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:12:02.710280    1632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.789119    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.789143    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.789223    1632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2 +20000M
	I0823 11:12:02.796505    1632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:12:02.796516    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.796535    1632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
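Disk creation above is just the two qemu-img calls: convert the raw seed image to qcow2, then grow it by the configured 20000 MB. Reduced to its essentials (MACHINE_DIR is an illustrative shorthand for the machines/addons-603000 directory shown in the log):

    MACHINE_DIR=$HOME/.minikube/machines/addons-603000   # illustrative path
    qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
    qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M    # sparse grow; prints "Image resized."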
	I0823 11:12:02.796546    1632 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:12:02.796585    1632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:3b:71:83:24:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.868731    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.868763    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.868768    1632 main.go:141] libmachine: Attempt 0
	I0823 11:12:02.868785    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:04.870905    1632 main.go:141] libmachine: Attempt 1
	I0823 11:12:04.871005    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:06.873192    1632 main.go:141] libmachine: Attempt 2
	I0823 11:12:06.873220    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:08.875257    1632 main.go:141] libmachine: Attempt 3
	I0823 11:12:08.875269    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:10.877286    1632 main.go:141] libmachine: Attempt 4
	I0823 11:12:10.877310    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:12.879362    1632 main.go:141] libmachine: Attempt 5
	I0823 11:12:12.879382    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881424    1632 main.go:141] libmachine: Attempt 6
	I0823 11:12:14.881454    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881611    1632 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0823 11:12:14.881699    1632 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e79d7d}
	I0823 11:12:14.881706    1632 main.go:141] libmachine: Found match: da:3b:71:83:24:7a
	I0823 11:12:14.881720    1632 main.go:141] libmachine: IP: 192.168.105.2
	I0823 11:12:14.881729    1632 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
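Address discovery here relies on macOS's built-in DHCP server: libmachine polls /var/db/dhcpd_leases every two seconds until the generated MAC appears. A rough manual equivalent for the NIC above (a grep sketch only; the lease entry format is the one quoted in the log):

    # Find the lease handed out to the VM's virtio NIC
    grep -i 'da:3b:71:83:24:7a' /var/db/dhcpd_leases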
	I0823 11:12:15.887997    1632 machine.go:88] provisioning docker machine ...
	I0823 11:12:15.888014    1632 buildroot.go:166] provisioning hostname "addons-603000"
	I0823 11:12:15.888459    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.888726    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.888732    1632 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-603000 && echo "addons-603000" | sudo tee /etc/hostname
	I0823 11:12:15.942083    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-603000
	
	I0823 11:12:15.942154    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.942421    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.942431    1632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-603000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-603000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-603000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:12:15.994730    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 11:12:15.994741    1632 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:12:15.994754    1632 buildroot.go:174] setting up certificates
	I0823 11:12:15.994761    1632 provision.go:83] configureAuth start
	I0823 11:12:15.994765    1632 provision.go:138] copyHostCerts
	I0823 11:12:15.994890    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:12:15.995099    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
	I0823 11:12:15.995204    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:12:15.995273    1632 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.addons-603000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-603000]
	I0823 11:12:16.087853    1632 provision.go:172] copyRemoteCerts
	I0823 11:12:16.087919    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:12:16.087927    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.117611    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:12:16.124255    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0823 11:12:16.130804    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0823 11:12:16.138250    1632 provision.go:86] duration metric: configureAuth took 143.474625ms
	I0823 11:12:16.138257    1632 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:12:16.138365    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:16.138406    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.138621    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.138626    1632 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:12:16.188528    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:12:16.188533    1632 buildroot.go:70] root file system type: tmpfs
	I0823 11:12:16.188589    1632 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:12:16.188629    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.188853    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.188886    1632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:12:16.244639    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0823 11:12:16.244690    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.244933    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.244942    1632 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:12:16.607992    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0823 11:12:16.608005    1632 machine.go:91] provisioned docker machine in 720.015167ms
	I0823 11:12:16.608010    1632 client.go:171] LocalClient.Create took 14.696413292s
	I0823 11:12:16.608025    1632 start.go:167] duration metric: libmachine.API.Create for "addons-603000" took 14.696491208s
	I0823 11:12:16.608031    1632 start.go:300] post-start starting for "addons-603000" (driver="qemu2")
	I0823 11:12:16.608036    1632 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:12:16.608110    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:12:16.608120    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.635532    1632 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:12:16.636917    1632 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:12:16.636928    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:12:16.636998    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:12:16.637031    1632 start.go:303] post-start completed in 28.997291ms
	I0823 11:12:16.637396    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:16.637545    1632 start.go:128] duration metric: createHost completed in 15.077498208s
	I0823 11:12:16.637587    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.637809    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.637813    1632 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0823 11:12:16.687915    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692814336.418860543
	
	I0823 11:12:16.687923    1632 fix.go:206] guest clock: 1692814336.418860543
	I0823 11:12:16.687928    1632 fix.go:219] Guest: 2023-08-23 11:12:16.418860543 -0700 PDT Remote: 2023-08-23 11:12:16.637549 -0700 PDT m=+15.183153626 (delta=-218.688457ms)
	I0823 11:12:16.687939    1632 fix.go:190] guest clock delta is within tolerance: -218.688457ms
	I0823 11:12:16.687945    1632 start.go:83] releasing machines lock for "addons-603000", held for 15.127950375s
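The garbled date +%!s(MISSING).%!N(MISSING) line is a logging artifact: Go's printf swallowed the verbs, but the 1692814336.418860543 output implies the command run in the guest was almost certainly date +%s.%N. The skew check it feeds reduces to something like the following (a sketch; macOS date lacks %N, so this compares whole seconds only):

    guest=$(ssh docker@192.168.105.2 date +%s)   # guest clock, queried over SSH
    host=$(date +%s)                             # host clock
    echo "guest-host delta: $((guest - host))s"  # log above measured about -0.219s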
	I0823 11:12:16.688248    1632 ssh_runner.go:195] Run: cat /version.json
	I0823 11:12:16.688258    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.688283    1632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:12:16.688315    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.830956    1632 ssh_runner.go:195] Run: systemctl --version
	I0823 11:12:16.833845    1632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:12:16.836334    1632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:12:16.836377    1632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 11:12:16.842975    1632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 11:12:16.842984    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.843114    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:16.850298    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0823 11:12:16.854361    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:12:16.858207    1632 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:12:16.858231    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:12:16.861904    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.865421    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:12:16.868360    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.871352    1632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:12:16.874459    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:12:16.878023    1632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:12:16.881145    1632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:12:16.883828    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:16.960568    1632 ssh_runner.go:195] Run: sudo systemctl restart containerd
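[Editor's note] The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver (SystemdCgroup = false) and the runc v2 runtime. A quick way to confirm the result inside the guest (a sketch; paths as in the log):

  grep -E 'SystemdCgroup|io.containerd.runc' /etc/containerd/config.toml
  sudo systemctl is-active containerd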
	I0823 11:12:16.969157    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.969235    1632 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:12:16.974606    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.980054    1632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:12:16.985654    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.990412    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:16.995694    1632 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 11:12:17.032758    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:17.037631    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:17.043070    1632 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:12:17.044366    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:12:17.046802    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:12:17.051842    1632 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:12:17.122781    1632 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:12:17.203234    1632 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:12:17.203250    1632 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0823 11:12:17.208465    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:17.289831    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:18.445051    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15523225s)
	I0823 11:12:18.445131    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.516370    1632 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0823 11:12:18.597904    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.678212    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.761579    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0823 11:12:18.769154    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.854594    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0823 11:12:18.877760    1632 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0823 11:12:18.877836    1632 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0823 11:12:18.879918    1632 start.go:534] Will wait 60s for crictl version
	I0823 11:12:18.879971    1632 ssh_runner.go:195] Run: which crictl
	I0823 11:12:18.881335    1632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0823 11:12:18.895610    1632 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0823 11:12:18.895681    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.905007    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.918762    1632 out.go:204] * Preparing Kubernetes v1.28.0 on Docker 24.0.4 ...
	I0823 11:12:18.918902    1632 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:12:18.920430    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:18.924018    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:18.924057    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:18.929191    1632 docker.go:636] Got preloaded images: 
	I0823 11:12:18.929199    1632 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.0 wasn't preloaded
	I0823 11:12:18.929238    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:18.932021    1632 ssh_runner.go:195] Run: which lz4
	I0823 11:12:18.933304    1632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 11:12:18.934619    1632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0823 11:12:18.934635    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356920680 bytes)
	I0823 11:12:20.248127    1632 docker.go:600] Took 1.314887 seconds to copy over tarball
	I0823 11:12:20.248187    1632 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 11:12:21.298381    1632 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.050206542s)
	I0823 11:12:21.298396    1632 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0823 11:12:21.314358    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:21.317830    1632 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0823 11:12:21.322906    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:21.401236    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:23.532716    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.131514959s)
	I0823 11:12:23.532818    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:23.539039    1632 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0823 11:12:23.539047    1632 cache_images.go:84] Images are preloaded, skipping loading
	I0823 11:12:23.539094    1632 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:12:23.547073    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:23.547082    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:23.547113    1632 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:12:23.547124    1632 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-603000 NodeName:addons-603000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0823 11:12:23.547195    1632 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-603000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
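[Editor's note] The generated kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be exercised without touching the node via kubeadm's dry-run mode, e.g. (sketch, using the path the log writes it to later):

  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run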
	I0823 11:12:23.547230    1632 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-603000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
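[Editor's note] In the systemd drop-in above, the empty ExecStart= line is deliberate: a drop-in must first clear the base unit's ExecStart before overriding it with the kubelet command line. The merged result can be inspected with:

  systemctl cat kubelet.service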
	I0823 11:12:23.547289    1632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0823 11:12:23.550418    1632 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:12:23.550450    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:12:23.553530    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0823 11:12:23.558535    1632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 11:12:23.563462    1632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0823 11:12:23.568191    1632 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0823 11:12:23.569452    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:23.573456    1632 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000 for IP: 192.168.105.2
	I0823 11:12:23.573466    1632 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.573659    1632 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:12:23.735121    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt ...
	I0823 11:12:23.735127    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt: {Name:mk63b4dc9bf64742b3a2443ec5bb13fecbad997f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735357    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key ...
	I0823 11:12:23.735361    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key: {Name:mk69cc2166a61f2546ed0135cfb5fd8f801bd14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735483    1632 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:12:23.796242    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt ...
	I0823 11:12:23.796246    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt: {Name:mk419a1ff00ed72437ca83474e3a88fdfaeeeee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796377    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key ...
	I0823 11:12:23.796380    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key: {Name:mk4c7f9a48478507429af69af483ca06fbc978e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796514    1632 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key
	I0823 11:12:23.796523    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt with IP's: []
	I0823 11:12:24.000999    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt ...
	I0823 11:12:24.001008    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: {Name:mkf571a66a11fe461d7b44eaf6283e5341b1a7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001274    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key ...
	I0823 11:12:24.001277    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key: {Name:mkcf4093aeeeecec47bfe93d95e186f81025a994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001388    1632 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969
	I0823 11:12:24.001398    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0823 11:12:24.112607    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 ...
	I0823 11:12:24.112611    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969: {Name:mkc8690ae4b550cd6392c142a6c0c10c4e6a5520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112766    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 ...
	I0823 11:12:24.112769    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969: {Name:mkce56fce274146ffaf24072ae4ff4b5e0acf555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112886    1632 certs.go:337] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt
	I0823 11:12:24.113091    1632 certs.go:341] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key
	I0823 11:12:24.113203    1632 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key
	I0823 11:12:24.113221    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt with IP's: []
	I0823 11:12:24.164570    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt ...
	I0823 11:12:24.164574    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt: {Name:mk78bcc5a9a59a81499b830c0df5af65d761e828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.164724    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key ...
	I0823 11:12:24.164727    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key: {Name:mk7a6ea56c8d07401b6d3adda9d302f5989259f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.165026    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:12:24.165052    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:12:24.165076    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:12:24.165112    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:12:24.165474    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:12:24.173558    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:12:24.180163    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:12:24.186570    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 11:12:24.193721    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:12:24.200859    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:12:24.207727    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:12:24.214435    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:12:24.221806    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:12:24.228902    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 11:12:24.234839    1632 ssh_runner.go:195] Run: openssl version
	I0823 11:12:24.236947    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:12:24.239793    1632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241347    1632 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241366    1632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.243416    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
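[Editor's note] The b5213941.0 symlink name is the certificate's OpenSSL subject hash, i.e. exactly what the `openssl x509 -hash -noout` call above prints. The two steps combine as (sketch):

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"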
	I0823 11:12:24.246590    1632 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:12:24.247919    1632 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0823 11:12:24.247965    1632 kubeadm.go:404] StartCluster: {Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:24.248029    1632 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:12:24.253769    1632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:12:24.257096    1632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:12:24.259889    1632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:12:24.262662    1632 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 11:12:24.262675    1632 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 11:12:24.285841    1632 kubeadm.go:322] [init] Using Kubernetes version: v1.28.0
	I0823 11:12:24.285870    1632 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 11:12:24.340213    1632 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 11:12:24.340268    1632 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 11:12:24.340328    1632 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0823 11:12:24.397608    1632 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 11:12:24.406812    1632 out.go:204]   - Generating certificates and keys ...
	I0823 11:12:24.406874    1632 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 11:12:24.406907    1632 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 11:12:24.448173    1632 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0823 11:12:24.501859    1632 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0823 11:12:24.630928    1632 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0823 11:12:24.705637    1632 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0823 11:12:24.853624    1632 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0823 11:12:24.853703    1632 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:24.984031    1632 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0823 11:12:24.984098    1632 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:25.055121    1632 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0823 11:12:25.098919    1632 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0823 11:12:25.161126    1632 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0823 11:12:25.161156    1632 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 11:12:25.251684    1632 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 11:12:25.312453    1632 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 11:12:25.380616    1632 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 11:12:25.465178    1632 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 11:12:25.465413    1632 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 11:12:25.466498    1632 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 11:12:25.474829    1632 out.go:204]   - Booting up control plane ...
	I0823 11:12:25.474890    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 11:12:25.474941    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 11:12:25.474973    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 11:12:25.475045    1632 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 11:12:25.475095    1632 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 11:12:25.475114    1632 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 11:12:25.557775    1632 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 11:12:29.059855    1632 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.502356 seconds
	I0823 11:12:29.059917    1632 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0823 11:12:29.066243    1632 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0823 11:12:29.575724    1632 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0823 11:12:29.575818    1632 kubeadm.go:322] [mark-control-plane] Marking the node addons-603000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0823 11:12:30.082813    1632 kubeadm.go:322] [bootstrap-token] Using token: l2h0hz.d0iyqenbl2749eth
	I0823 11:12:30.096111    1632 out.go:204]   - Configuring RBAC rules ...
	I0823 11:12:30.096178    1632 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0823 11:12:30.096240    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0823 11:12:30.098015    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0823 11:12:30.099101    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0823 11:12:30.100289    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0823 11:12:30.101319    1632 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0823 11:12:30.105824    1632 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0823 11:12:30.274483    1632 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0823 11:12:30.492880    1632 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0823 11:12:30.493261    1632 kubeadm.go:322] 
	I0823 11:12:30.493290    1632 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0823 11:12:30.493295    1632 kubeadm.go:322] 
	I0823 11:12:30.493350    1632 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0823 11:12:30.493355    1632 kubeadm.go:322] 
	I0823 11:12:30.493371    1632 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0823 11:12:30.493409    1632 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0823 11:12:30.493436    1632 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0823 11:12:30.493440    1632 kubeadm.go:322] 
	I0823 11:12:30.493470    1632 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0823 11:12:30.493473    1632 kubeadm.go:322] 
	I0823 11:12:30.493507    1632 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0823 11:12:30.493512    1632 kubeadm.go:322] 
	I0823 11:12:30.493534    1632 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0823 11:12:30.493571    1632 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0823 11:12:30.493608    1632 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0823 11:12:30.493612    1632 kubeadm.go:322] 
	I0823 11:12:30.493652    1632 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0823 11:12:30.493686    1632 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0823 11:12:30.493689    1632 kubeadm.go:322] 
	I0823 11:12:30.493755    1632 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493808    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f \
	I0823 11:12:30.493820    1632 kubeadm.go:322] 	--control-plane 
	I0823 11:12:30.493824    1632 kubeadm.go:322] 
	I0823 11:12:30.493863    1632 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0823 11:12:30.493867    1632 kubeadm.go:322] 
	I0823 11:12:30.493911    1632 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493962    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f 
	I0823 11:12:30.494020    1632 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
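[Editor's note] The sha256 value in --discovery-token-ca-cert-hash above can be recomputed from the cluster CA using the standard kubeadm recipe (sketch; the certs dir matches certificatesDir in the config above):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'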
	I0823 11:12:30.494027    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:30.494035    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:30.502806    1632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0823 11:12:30.507845    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0823 11:12:30.511861    1632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
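[Editor's note] The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log; a typical bridge conflist has roughly this shape (illustrative fields only, written as a shell here-doc):

  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
       "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
      {"type": "portmap", "capabilities": {"portMappings": true}}
    ]
  }
  EOF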
	I0823 11:12:30.516952    1632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:12:30.517017    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.517023    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128 minikube.k8s.io/name=addons-603000 minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.588330    1632 ops.go:34] apiserver oom_adj: -16
	I0823 11:12:30.588368    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.620791    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.158369    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.658375    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.158344    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.658320    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.158345    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.658343    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.158277    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.658250    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.158228    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.658245    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.158202    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.658170    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.156385    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.656540    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.158165    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.658197    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.158131    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.658110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.158144    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.658128    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.158110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.658102    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.158090    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.658123    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.158020    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.214120    1632 kubeadm.go:1081] duration metric: took 12.6974425s to wait for elevateKubeSystemPrivileges.
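[Editor's note] The repeated `kubectl get sa default` calls above are a poll: minikube retries until the default ServiceAccount exists, which signals the control plane is serving requests. Roughly equivalent to (sketch):

  until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done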
	I0823 11:12:43.214131    1632 kubeadm.go:406] StartCluster complete in 18.96661775s
	I0823 11:12:43.214140    1632 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214301    1632 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:43.214543    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214750    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:12:43.214793    1632 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0823 11:12:43.214848    1632 addons.go:69] Setting volumesnapshots=true in profile "addons-603000"
	I0823 11:12:43.214857    1632 addons.go:231] Setting addon volumesnapshots=true in "addons-603000"
	I0823 11:12:43.214861    1632 addons.go:69] Setting ingress=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214901    1632 addons.go:69] Setting default-storageclass=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 addons.go:231] Setting addon ingress=true in "addons-603000"
	I0823 11:12:43.214917    1632 addons.go:69] Setting cloud-spanner=true in profile "addons-603000"
	I0823 11:12:43.214937    1632 addons.go:69] Setting gcp-auth=true in profile "addons-603000"
	I0823 11:12:43.214943    1632 mustload.go:65] Loading cluster: addons-603000
	I0823 11:12:43.214929    1632 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-603000"
	I0823 11:12:43.214951    1632 addons.go:231] Setting addon cloud-spanner=true in "addons-603000"
	I0823 11:12:43.215174    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215232    1632 addons.go:69] Setting metrics-server=true in profile "addons-603000"
	I0823 11:12:43.215240    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215246    1632 addons.go:231] Setting addon metrics-server=true in "addons-603000"
	I0823 11:12:43.215279    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215384    1632 addons.go:69] Setting registry=true in profile "addons-603000"
	I0823 11:12:43.215406    1632 addons.go:231] Setting addon registry=true in "addons-603000"
	I0823 11:12:43.215406    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:43.215469    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215573    1632 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.215613    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215623    1632 addons.go:69] Setting ingress-dns=true in profile "addons-603000"
	I0823 11:12:43.215632    1632 addons.go:69] Setting storage-provisioner=true in profile "addons-603000"
	I0823 11:12:43.215635    1632 addons.go:231] Setting addon ingress-dns=true in "addons-603000"
	I0823 11:12:43.215647    1632 addons.go:231] Setting addon storage-provisioner=true in "addons-603000"
	I0823 11:12:43.215715    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215748    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214927    1632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-603000"
	I0823 11:12:43.215926    1632 addons.go:69] Setting inspektor-gadget=true in profile "addons-603000"
	I0823 11:12:43.215930    1632 addons.go:231] Setting addon inspektor-gadget=true in "addons-603000"
	I0823 11:12:43.215950    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.216258    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	W0823 11:12:43.216285    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216296    1632 addons.go:277] "addons-603000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0823 11:12:43.216301    1632 addons.go:467] Verifying addon ingress=true in "addons-603000"
	I0823 11:12:43.220697    1632 out.go:177] * Verifying ingress addon...
	W0823 11:12:43.216646    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216836    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216925    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217082    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217134    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217426    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217553    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	I0823 11:12:43.218066    1632 host.go:66] Checking if "addons-603000" exists ...
	W0823 11:12:43.229685    1632 addons.go:277] "addons-603000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229693    1632 addons.go:277] "addons-603000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229707    1632 addons_storage_classes.go:55] "addons-603000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0823 11:12:43.229696    1632 addons.go:277] "addons-603000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229727    1632 addons.go:277] "addons-603000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229749    1632 addons.go:277] "addons-603000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0823 11:12:43.230154    1632 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0823 11:12:43.235671    1632 addons.go:277] "addons-603000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0823 11:12:43.235676    1632 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0823 11:12:43.237659    1632 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-603000" context rescaled to 1 replicas
	I0823 11:12:43.239627    1632 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:43.245678    1632 out.go:177] * Verifying Kubernetes components...
	I0823 11:12:43.239670    1632 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0823 11:12:43.239676    1632 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.239682    1632 addons.go:231] Setting addon default-storageclass=true in "addons-603000"
	I0823 11:12:43.239679    1632 addons.go:467] Verifying addon registry=true in "addons-603000"
	I0823 11:12:43.239684    1632 addons.go:467] Verifying addon metrics-server=true in "addons-603000"
	I0823 11:12:43.241522    1632 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0823 11:12:43.257706    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.257792    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:12:43.259109    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0823 11:12:43.263610    1632 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0823 11:12:43.264359    1632 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.269028    1632 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.275582    1632 out.go:177] * Verifying registry addon...
	I0823 11:12:43.281668    1632 out.go:177] * Verifying csi-hostpath-driver addon...
	I0823 11:12:43.281674    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:12:43.281680    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0823 11:12:43.281692    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0823 11:12:43.288063    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0823 11:12:43.293691    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293700    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293699    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.294283    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0823 11:12:43.297149    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0823 11:12:43.298922    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0823 11:12:43.392760    1632 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0823 11:12:43.392773    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0823 11:12:43.396919    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.403660    1632 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0823 11:12:43.403671    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0823 11:12:43.426051    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.442459    1632 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0823 11:12:43.442469    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0823 11:12:43.465118    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0823 11:12:43.465127    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0823 11:12:43.485410    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0823 11:12:43.485421    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0823 11:12:43.505701    1632 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0823 11:12:43.505712    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0823 11:12:43.510666    1632 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.510672    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0823 11:12:43.522915    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.857341    1632 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0823 11:12:43.857696    1632 node_ready.go:35] waiting up to 6m0s for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859423    1632 node_ready.go:49] node "addons-603000" has status "Ready":"True"
	I0823 11:12:43.859431    1632 node_ready.go:38] duration metric: took 1.726334ms waiting for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859435    1632 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:12:43.862557    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:12:45.872773    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:47.873129    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:49.841300    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0823 11:12:49.841316    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.876535    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0823 11:12:49.882169    1632 addons.go:231] Setting addon gcp-auth=true in "addons-603000"
	I0823 11:12:49.882191    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:49.882982    1632 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0823 11:12:49.882989    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.913095    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0823 11:12:49.916035    1632 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0823 11:12:49.920011    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0823 11:12:49.920017    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0823 11:12:49.927312    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0823 11:12:49.927320    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0823 11:12:49.934731    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:49.934739    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0823 11:12:49.943537    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:50.372936    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:50.550032    1632 addons.go:467] Verifying addon gcp-auth=true in "addons-603000"
	I0823 11:12:50.554005    1632 out.go:177] * Verifying gcp-auth addon...
	I0823 11:12:50.561304    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0823 11:12:50.564772    1632 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0823 11:12:50.564779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:50.566533    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.070598    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.570353    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.070168    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.570325    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.873631    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:53.069304    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:53.570391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.068627    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.570250    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.068767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.371099    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:55.570140    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.070737    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.570638    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.070296    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.374140    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:57.570286    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.070358    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.569814    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.070104    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.570841    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.872601    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:00.070509    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:00.570371    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.070409    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.569429    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.069671    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.372554    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:02.571673    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.069684    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.570027    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.070029    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.569893    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.872229    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:05.069764    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:05.569750    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.070699    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.569829    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.872245    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:07.070111    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:07.569767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.069905    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.571644    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.069600    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.372178    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:09.569699    1632 kapi.go:107] duration metric: took 19.008846917s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0823 11:13:09.574491    1632 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-603000 cluster.
	I0823 11:13:09.578500    1632 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0823 11:13:09.582495    1632 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
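The gcp-auth webhook mutates every newly created pod unless the pod opts out with the label named in the message above. Below is a sketch of such a pod built with client-go types; note the log only names the label key, so the "true" value is an assumption.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pod that opts out of gcp-auth credential injection via the
        // gcp-auth-skip-secret label; the "true" value is assumed.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-creds",
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
                },
            },
        }
        fmt.Println(pod.Labels)
    }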
	I0823 11:13:11.372160    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:13.871903    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:15.872253    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:18.371986    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:20.372421    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:22.871118    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:23.871704    1632 pod_ready.go:92] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.871715    1632 pod_ready.go:81] duration metric: took 40.010097208s waiting for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.871719    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.872661    1632 pod_ready.go:97] error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872669    1632 pod_ready.go:81] duration metric: took 947.666µs waiting for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	E0823 11:13:23.872673    1632 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872689    1632 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875032    1632 pod_ready.go:92] pod "etcd-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.875037    1632 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875040    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877331    1632 pod_ready.go:92] pod "kube-apiserver-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.877336    1632 pod_ready.go:81] duration metric: took 2.293291ms waiting for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877340    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879623    1632 pod_ready.go:92] pod "kube-controller-manager-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.879629    1632 pod_ready.go:81] duration metric: took 2.287416ms waiting for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879633    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072856    1632 pod_ready.go:92] pod "kube-proxy-wxvd5" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.072864    1632 pod_ready.go:81] duration metric: took 193.233541ms waiting for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072869    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.472997    1632 pod_ready.go:92] pod "kube-scheduler-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.473008    1632 pod_ready.go:81] duration metric: took 400.145667ms waiting for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.473012    1632 pod_ready.go:38] duration metric: took 40.614530833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
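The pod_ready.go loop traced above follows the standard client-go pattern: fetch the pod, scan status.conditions for Ready=True, and retry on an interval until a deadline. A minimal sketch under assumed kubeconfig location and poll interval:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True, roughly the
    // loop pod_ready.go runs above (the 2s interval is an assumption).
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-blj5d", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ready")
    }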
	I0823 11:13:24.473025    1632 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:13:24.473121    1632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:13:24.478605    1632 api_server.go:72] duration metric: took 41.239942375s to wait for apiserver process to appear ...
	I0823 11:13:24.478610    1632 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:13:24.478617    1632 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0823 11:13:24.482269    1632 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0823 11:13:24.482906    1632 api_server.go:141] control plane version: v1.28.0
	I0823 11:13:24.482912    1632 api_server.go:131] duration metric: took 4.29975ms to wait for apiserver health ...
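The healthz probe logged above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A sketch follows; skipping TLS verification is a shortcut for illustration only, whereas minikube itself verifies against the cluster CA from the kubeconfig.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Probe the apiserver's /healthz endpoint, as api_server.go does above.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok"
    }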
	I0823 11:13:24.482914    1632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:13:24.674508    1632 system_pods.go:59] 7 kube-system pods found
	I0823 11:13:24.674519    1632 system_pods.go:61] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:24.674521    1632 system_pods.go:61] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:24.674524    1632 system_pods.go:61] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:24.674526    1632 system_pods.go:61] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:24.674531    1632 system_pods.go:61] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:24.674534    1632 system_pods.go:61] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:24.674539    1632 system_pods.go:61] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:24.674542    1632 system_pods.go:74] duration metric: took 191.630375ms to wait for pod list to return data ...
	I0823 11:13:24.674546    1632 default_sa.go:34] waiting for default service account to be created ...
	I0823 11:13:24.872182    1632 default_sa.go:45] found service account: "default"
	I0823 11:13:24.872197    1632 default_sa.go:55] duration metric: took 197.644583ms for default service account to be created ...
	I0823 11:13:24.872203    1632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0823 11:13:25.073612    1632 system_pods.go:86] 7 kube-system pods found
	I0823 11:13:25.073624    1632 system_pods.go:89] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:25.073627    1632 system_pods.go:89] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:25.073629    1632 system_pods.go:89] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:25.073632    1632 system_pods.go:89] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:25.073636    1632 system_pods.go:89] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:25.073641    1632 system_pods.go:89] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:25.073644    1632 system_pods.go:89] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:25.073647    1632 system_pods.go:126] duration metric: took 201.446458ms to wait for k8s-apps to be running ...
	I0823 11:13:25.073650    1632 system_svc.go:44] waiting for kubelet service to be running ....
	I0823 11:13:25.073715    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:13:25.078691    1632 system_svc.go:56] duration metric: took 5.037209ms WaitForService to wait for kubelet.
	I0823 11:13:25.078704    1632 kubeadm.go:581] duration metric: took 41.84005475s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0823 11:13:25.078718    1632 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:13:25.271033    1632 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:13:25.271069    1632 node_conditions.go:123] node cpu capacity is 2
	I0823 11:13:25.271075    1632 node_conditions.go:105] duration metric: took 192.359208ms to run NodePressure ...
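The NodePressure check reads the node object's capacity and pressure conditions, which is where the ephemeral-storage and cpu figures above come from. A sketch using client-go's ResourceList helpers (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-603000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Capacity values the log reports: cpu 2, ephemeral-storage 17784760Ki.
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
                fmt.Printf("%s=%s\n", c.Type, c.Status)
            }
        }
    }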
	I0823 11:13:25.271080    1632 start.go:228] waiting for startup goroutines ...
	I0823 11:18:43.244336    1632 kapi.go:107] duration metric: took 6m0.011535667s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0823 11:18:43.244405    1632 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0823 11:18:43.298298    1632 kapi.go:107] duration metric: took 6m0.001378042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0823 11:18:43.298326    1632 kapi.go:107] duration metric: took 6m0.00762275s to wait for kubernetes.io/minikube-addons=registry ...
	W0823 11:18:43.298353    1632 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	W0823 11:18:43.298328    1632 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0823 11:18:43.304474    1632 out.go:177] * Enabled addons: volumesnapshots, cloud-spanner, storage-provisioner, metrics-server, ingress-dns, default-storageclass, inspektor-gadget, gcp-auth
	I0823 11:18:43.311559    1632 addons.go:502] enable addons completed in 6m0.094130167s: enabled=[volumesnapshots cloud-spanner storage-provisioner metrics-server ingress-dns default-storageclass inspektor-gadget gcp-auth]
	I0823 11:18:43.311576    1632 start.go:233] waiting for cluster config update ...
	I0823 11:18:43.311585    1632 start.go:242] writing updated cluster config ...
	I0823 11:18:43.312012    1632 ssh_runner.go:195] Run: rm -f paused
	I0823 11:18:43.342628    1632 start.go:600] kubectl: 1.27.2, cluster: 1.28.0 (minor skew: 1)
	I0823 11:18:43.345571    1632 out.go:177] * Done! kubectl is now configured to use "addons-603000" cluster and "default" namespace by default
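Note that three addons (ingress, registry, csi-hostpath-driver) never produced matching pods, and their waits expired at exactly 6m0s with "context deadline exceeded" in the warnings above. That failure shape comes from a bounded poll; a minimal sketch of the pattern, with toy durations:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check until it returns true or the time budget is spent,
    // surfacing context.DeadlineExceeded as the addon callbacks above do.
    func waitFor(check func() bool, timeout, interval time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            if check() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // "context deadline exceeded", as in the warnings above
            case <-time.After(interval):
            }
        }
    }

    func main() {
        err := waitFor(func() bool { return false }, 2*time.Second, 500*time.Millisecond)
        fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
    }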
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:42:55 UTC. --
	Aug 23 18:30:44 addons-603000 dockerd[1103]: time="2023-08-23T18:30:44.885257574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:30:44 addons-603000 dockerd[1103]: time="2023-08-23T18:30:44.885263907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:30:44 addons-603000 cri-dockerd[995]: time="2023-08-23T18:30:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1ce470d3ac5dbd8639ca47c5556985a38ebc27169c1b2e8fafd5221adaafb8f7/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 23 18:30:45 addons-603000 dockerd[1097]: time="2023-08-23T18:30:45.233004021Z" level=warning msg="reference for unknown type: " digest="sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98" remote="ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Aug 23 18:30:49 addons-603000 cri-dockerd[995]: time="2023-08-23T18:30:49Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.0@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216878665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216925454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216931996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216936204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091151665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091185331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091194206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.091200122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.134236915Z" level=info msg="shim disconnected" id=c5a1a543d888cc67f431de8db9e4746af52b47a1833c348a10757deec92410c6 namespace=moby
	Aug 23 18:33:50 addons-603000 dockerd[1097]: time="2023-08-23T18:33:50.134449577Z" level=info msg="ignoring event" container=c5a1a543d888cc67f431de8db9e4746af52b47a1833c348a10757deec92410c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.134489534Z" level=warning msg="cleaning up after shim disconnected" id=c5a1a543d888cc67f431de8db9e4746af52b47a1833c348a10757deec92410c6 namespace=moby
	Aug 23 18:33:50 addons-603000 dockerd[1103]: time="2023-08-23T18:33:50.134494826Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113475782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113502490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113681656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.113691239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:39:02 addons-603000 dockerd[1097]: time="2023-08-23T18:39:02.160574422Z" level=info msg="ignoring event" container=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160721255Z" level=info msg="shim disconnected" id=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160747005Z" level=warning msg="cleaning up after shim disconnected" id=77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38 namespace=moby
	Aug 23 18:39:02 addons-603000 dockerd[1103]: time="2023-08-23T18:39:02.160751297Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	77d7b0e8fb48b       1499ed4fbd0aa                                                                                                       3 minutes ago       Exited              minikube-ingress-dns      10                  ee55b672623b1
	c3aac1112826c       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98               12 minutes ago      Running             headlamp                  0                   1ce470d3ac5db
	e852eb8754f9d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf        29 minutes ago      Running             gcp-auth                  0                   b3f97bd7715ac
	db988e4e18bfe       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:900ee9d94e12c15caebf0f198a51b532e6e2e373f6adfdef6fc78d11f6ecc6ac   30 minutes ago      Running             gadget                    0                   12623df028552
	20ee50fe57df9       940f54a5bcae9                                                                                                       30 minutes ago      Running             kube-proxy                0                   046b3dd276694
	eb22253f397d0       97e04611ad434                                                                                                       30 minutes ago      Running             coredns                   0                   83362738c741a
	695ccc939e29c       762dce4090c5f                                                                                                       30 minutes ago      Running             kube-scheduler            0                   cd7498705ccfc
	0d85c39c58b71       9cdd6470f48c8                                                                                                       30 minutes ago      Running             etcd                      0                   52014f9a89d58
	9700e59743269       46cc66ccc7c19                                                                                                       30 minutes ago      Running             kube-controller-manager   0                   0e19c8d65b488
	594c65a938703       00543d2fe5d71                                                                                                       30 minutes ago      Running             kube-apiserver            0                   e5952ff9dbb6d
	
	* 
	* ==> coredns [eb22253f397d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48978 - 47852 "HINFO IN 1860146809240780526.8201528298968188454. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004478075s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-603000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-603000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=addons-603000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:12:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-603000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:42:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 23 Aug 2023 18:41:14 +0000   Wed, 23 Aug 2023 18:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-603000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 d563976ae62b43f8a1fb53a177d29cdd
	  System UUID:                d563976ae62b43f8a1fb53a177d29cdd
	  Boot ID:                    95f14d55-f8c9-49e4-bb2e-505c410326e7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-ndv9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  gcp-auth                    gcp-auth-d4c87556c-k5hw7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  headlamp                    headlamp-699c48fb74-fhmj2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-blj5d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-603000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-603000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-603000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-wxvd5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-603000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30m   kube-proxy       
	  Normal  Starting                 30m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m   kubelet          Node addons-603000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m   kubelet          Node addons-603000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m   kubelet          Node addons-603000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m   kubelet          Node addons-603000 status is now: NodeReady
	  Normal  RegisteredNode           30m   node-controller  Node addons-603000 event: Registered Node addons-603000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.645801] EINJ: EINJ table not found.
	[  +0.508913] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043174] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000868] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.060149] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.082417] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.495013] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.161467] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.082809] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.085797] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.141175] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.084612] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.080569] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.082356] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.081344] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.093418] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.547960] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +2.110352] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.040486] systemd-fstab-generator[1407]: Ignoring "noauto" for root device
	[  +4.639082] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[ +13.918576] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.422998] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.996433] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug23 18:13] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.103613] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [0d85c39c58b7] <==
	* {"level":"info","ts":"2023-08-23T18:12:26.555101Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.555137Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.551069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:12:26.555541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-23T18:12:26.563277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:12:26.563307Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-08-23T18:12:48.393724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.007861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-blj5d\" ","response":"range_response_count:1 size:4755"}
	{"level":"info","ts":"2023-08-23T18:12:48.39376Z","caller":"traceutil/trace.go:171","msg":"trace[2046528636] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-blj5d; range_end:; response_count:1; response_revision:423; }","duration":"169.059532ms","start":"2023-08-23T18:12:48.224693Z","end":"2023-08-23T18:12:48.393753Z","steps":["trace[2046528636] 'range keys from in-memory index tree'  (duration: 168.95968ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:12:48.39392Z","caller":"traceutil/trace.go:171","msg":"trace[720409665] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"100.636151ms","start":"2023-08-23T18:12:48.29328Z","end":"2023-08-23T18:12:48.393917Z","steps":["trace[720409665] 'process raft request'  (duration: 77.612181ms)","trace[720409665] 'compare'  (duration: 22.773829ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-23T18:12:50.777156Z","caller":"traceutil/trace.go:171","msg":"trace[471484672] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"104.054592ms","start":"2023-08-23T18:12:50.673089Z","end":"2023-08-23T18:12:50.777143Z","steps":["trace[471484672] 'process raft request'  (duration: 103.971962ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:22:26.650207Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":655}
	{"level":"info","ts":"2023-08-23T18:22:26.650931Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":655,"took":"518.817µs","hash":1262660470}
	{"level":"info","ts":"2023-08-23T18:22:26.650949Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1262660470,"revision":655,"compact-revision":-1}
	{"level":"info","ts":"2023-08-23T18:27:26.652533Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":748}
	{"level":"info","ts":"2023-08-23T18:27:26.652937Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":748,"took":"267.17µs","hash":3279716501}
	{"level":"info","ts":"2023-08-23T18:27:26.652952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279716501,"revision":748,"compact-revision":655}
	{"level":"info","ts":"2023-08-23T18:32:26.655732Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":842}
	{"level":"info","ts":"2023-08-23T18:32:26.656056Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":842,"took":"197.577µs","hash":1969901087}
	{"level":"info","ts":"2023-08-23T18:32:26.656068Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1969901087,"revision":842,"compact-revision":748}
	{"level":"info","ts":"2023-08-23T18:37:26.658244Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":973}
	{"level":"info","ts":"2023-08-23T18:37:26.658629Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":973,"took":"234.457µs","hash":4022607794}
	{"level":"info","ts":"2023-08-23T18:37:26.658641Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4022607794,"revision":973,"compact-revision":842}
	{"level":"info","ts":"2023-08-23T18:42:26.66149Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2023-08-23T18:42:26.661796Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1071,"took":"188.042µs","hash":2460628194}
	{"level":"info","ts":"2023-08-23T18:42:26.661807Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2460628194,"revision":1071,"compact-revision":973}
	
	* 
	* ==> gcp-auth [e852eb8754f9] <==
	* 2023/08/23 18:13:08 GCP Auth Webhook started!
	2023/08/23 18:13:18 failed to get releases file: Get "https://storage.googleapis.com/minikube-gcp-auth/releases.json": dial tcp: lookup storage.googleapis.com on 10.96.0.10:53: read udp 10.244.0.5:60048->10.96.0.10:53: read: connection refused
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:42:56 up 30 min,  0 users,  load average: 0.06, 0.13, 0.10
	Linux addons-603000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [594c65a93870] <==
	* I0823 18:12:28.325487       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0823 18:12:28.327337       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0823 18:12:28.327348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0823 18:12:28.483580       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0823 18:12:28.494448       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0823 18:12:28.528987       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0823 18:12:28.530764       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0823 18:12:28.531188       1 controller.go:624] quota admission added evaluator for: endpoints
	I0823 18:12:28.532349       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:12:29.394948       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0823 18:12:30.000403       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0823 18:12:30.005168       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0823 18:12:30.011907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0823 18:12:42.297794       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0823 18:12:42.999693       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0823 18:12:43.740564       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:12:50.388002       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.33.188"}
	I0823 18:12:50.406209       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0823 18:17:27.368575       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:22:27.369052       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:27:27.369080       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:30:44.504708       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.202.15"}
	I0823 18:32:27.369968       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:37:27.370986       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:42:27.371798       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [9700e5974326] <==
	* I0823 18:39:42.301310       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:39:57.301960       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:39:57.302104       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:40:12.302958       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:40:12.302981       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:40:27.303899       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:40:27.304191       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:40:42.304147       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:40:42.304173       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:40:57.304649       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:40:57.304835       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:12.305005       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:12.305092       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:27.305078       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:27.305175       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:42.305198       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:42.305221       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:41:57.305830       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:41:57.305963       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:42:12.306917       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:42:12.306942       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:42:27.308252       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:42:27.308335       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0823 18:42:42.308556       1 pv_controller.go:1562] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0823 18:42:42.308592       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
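This ProvisioningFailed loop is a direct consequence of the csi-hostpath-driver addon timing out earlier: the test's PVC names a StorageClass, csi-hostpath-sc, that the addon would normally have created. Creating the class stops the retry loop; a sketch via client-go, where the hostpath.csi.k8s.io provisioner name is an assumption (the driver never came up to confirm it):

    package main

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        sc := &storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "csi-hostpath-sc"},
            // Assumed provisioner; the failed addon never reported its own.
            Provisioner: "hostpath.csi.k8s.io",
        }
        if _, err := cs.StorageV1().StorageClasses().Create(context.Background(), sc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }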
	
	* 
	* ==> kube-proxy [20ee50fe57df] <==
	* I0823 18:12:44.324386       1 server_others.go:69] "Using iptables proxy"
	I0823 18:12:44.336042       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0823 18:12:44.344520       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0823 18:12:44.344533       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0823 18:12:44.345190       1 server_others.go:152] "Using iptables Proxier"
	I0823 18:12:44.345242       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0823 18:12:44.345409       1 server.go:846] "Version info" version="v1.28.0"
	I0823 18:12:44.345419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:12:44.346019       1 config.go:188] "Starting service config controller"
	I0823 18:12:44.346074       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0823 18:12:44.346113       1 config.go:97] "Starting endpoint slice config controller"
	I0823 18:12:44.346120       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0823 18:12:44.346526       1 config.go:315] "Starting node config controller"
	I0823 18:12:44.346554       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0823 18:12:44.446872       1 shared_informer.go:318] Caches are synced for node config
	I0823 18:12:44.446887       1 shared_informer.go:318] Caches are synced for service config
	I0823 18:12:44.446897       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [695ccc939e29] <==
	* W0823 18:12:27.662067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0823 18:12:27.662073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0823 18:12:27.662103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0823 18:12:27.662264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0823 18:12:27.662344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0823 18:12:27.662399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0823 18:12:27.662439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0823 18:12:27.662446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0823 18:12:27.662483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0823 18:12:27.662530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0823 18:12:27.662556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0823 18:12:27.662568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0823 18:12:27.662623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0823 18:12:27.662630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0823 18:12:27.662674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:12:27.662681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0823 18:12:28.959674       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:42:56 UTC. --
	Aug 23 18:41:18 addons-603000 kubelet[2300]: I0823 18:41:18.071343    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:41:18 addons-603000 kubelet[2300]: E0823 18:41:18.071743    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:41:30 addons-603000 kubelet[2300]: E0823 18:41:30.075968    2300 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 23 18:41:30 addons-603000 kubelet[2300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 23 18:41:30 addons-603000 kubelet[2300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 23 18:41:30 addons-603000 kubelet[2300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 23 18:41:31 addons-603000 kubelet[2300]: I0823 18:41:31.071365    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:41:31 addons-603000 kubelet[2300]: E0823 18:41:31.071560    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:41:42 addons-603000 kubelet[2300]: I0823 18:41:42.072026    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:41:42 addons-603000 kubelet[2300]: E0823 18:41:42.072413    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:41:55 addons-603000 kubelet[2300]: I0823 18:41:55.071811    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:41:55 addons-603000 kubelet[2300]: E0823 18:41:55.071972    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:42:10 addons-603000 kubelet[2300]: I0823 18:42:10.071260    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:42:10 addons-603000 kubelet[2300]: E0823 18:42:10.071597    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:42:25 addons-603000 kubelet[2300]: I0823 18:42:25.073765    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:42:25 addons-603000 kubelet[2300]: E0823 18:42:25.073926    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:42:30 addons-603000 kubelet[2300]: W0823 18:42:30.069953    2300 machine.go:65] Cannot read vendor id correctly, set empty.
	Aug 23 18:42:30 addons-603000 kubelet[2300]: E0823 18:42:30.076386    2300 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 23 18:42:30 addons-603000 kubelet[2300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 23 18:42:30 addons-603000 kubelet[2300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 23 18:42:30 addons-603000 kubelet[2300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 23 18:42:40 addons-603000 kubelet[2300]: I0823 18:42:40.073476    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:42:40 addons-603000 kubelet[2300]: E0823 18:42:40.073608    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:42:55 addons-603000 kubelet[2300]: I0823 18:42:55.071266    2300 scope.go:117] "RemoveContainer" containerID="77d7b0e8fb48b6393cff1e50990ed41db1257d869c113d4a17664cfa3412ff38"
	Aug 23 18:42:55 addons-603000 kubelet[2300]: E0823 18:42:55.071358    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-603000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (720.84s)

TestAddons/parallel/CloudSpanner (820.39s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-08-23 11:30:43.435703 -0700 PDT m=+1161.576332167
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-603000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-603000: exit status 10 (1m39.60719825s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-603000" : exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-603000 -n addons-603000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-603000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |                     |
	|         | -p download-only-861000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| delete  | -p download-only-861000        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | --download-only -p             | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT |                     |
	|         | binary-mirror-536000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49317         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-536000        | binary-mirror-536000 | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:12 PDT |
	| start   | -p addons-603000               | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:12 PDT | 23 Aug 23 11:18 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT |                     |
	|         | addons-603000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-603000        | jenkins | v1.31.2 | 23 Aug 23 11:30 PDT | 23 Aug 23 11:30 PDT |
	|         | -p addons-603000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:12:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:12:01.473993    1632 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:12:01.474116    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474118    1632 out.go:309] Setting ErrFile to fd 2...
	I0823 11:12:01.474121    1632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:12:01.474234    1632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:12:01.475282    1632 out.go:303] Setting JSON to false
	I0823 11:12:01.490493    1632 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":694,"bootTime":1692813627,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:12:01.490568    1632 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:12:01.495699    1632 out.go:177] * [addons-603000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:12:01.502675    1632 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:12:01.502715    1632 notify.go:220] Checking for updates...
	I0823 11:12:01.509745    1632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:01.512689    1632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:12:01.515810    1632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:12:01.518747    1632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:12:01.521752    1632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:12:01.524877    1632 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:12:01.528782    1632 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:12:01.535731    1632 start.go:298] selected driver: qemu2
	I0823 11:12:01.535736    1632 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:12:01.535742    1632 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:12:01.537642    1632 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:12:01.540695    1632 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:12:01.543724    1632 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:12:01.543758    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:01.543773    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:01.543776    1632 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:12:01.543783    1632 start_flags.go:319] config:
	{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:01.547827    1632 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:12:01.555769    1632 out.go:177] * Starting control plane node addons-603000 in cluster addons-603000
	I0823 11:12:01.559683    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:01.559703    1632 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:12:01.559720    1632 cache.go:57] Caching tarball of preloaded images
	I0823 11:12:01.559801    1632 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:12:01.559806    1632 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:12:01.560010    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:01.560023    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json: {Name:mkc2981401a387d0fc588c73edb3c2135fa892ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:01.560240    1632 start.go:365] acquiring machines lock for addons-603000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:12:01.560349    1632 start.go:369] acquired machines lock for "addons-603000" in 102.292µs
	I0823 11:12:01.560360    1632 start.go:93] Provisioning new machine with config: &{Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:01.560401    1632 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:12:01.568742    1632 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0823 11:12:01.911891    1632 start.go:159] libmachine.API.Create for "addons-603000" (driver="qemu2")
	I0823 11:12:01.911940    1632 client.go:168] LocalClient.Create starting
	I0823 11:12:01.912112    1632 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:12:02.016848    1632 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:12:02.196172    1632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:12:02.677712    1632 main.go:141] libmachine: Creating SSH key...
	I0823 11:12:02.710073    1632 main.go:141] libmachine: Creating Disk image...
	I0823 11:12:02.710078    1632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:12:02.710280    1632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.789119    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.789143    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.789223    1632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2 +20000M
	I0823 11:12:02.796505    1632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:12:02.796516    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.796535    1632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.796546    1632 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:12:02.796585    1632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:3b:71:83:24:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/disk.qcow2
	I0823 11:12:02.868731    1632 main.go:141] libmachine: STDOUT: 
	I0823 11:12:02.868763    1632 main.go:141] libmachine: STDERR: 
	I0823 11:12:02.868768    1632 main.go:141] libmachine: Attempt 0
	I0823 11:12:02.868785    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:04.870905    1632 main.go:141] libmachine: Attempt 1
	I0823 11:12:04.871005    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:06.873192    1632 main.go:141] libmachine: Attempt 2
	I0823 11:12:06.873220    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:08.875257    1632 main.go:141] libmachine: Attempt 3
	I0823 11:12:08.875269    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:10.877286    1632 main.go:141] libmachine: Attempt 4
	I0823 11:12:10.877310    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:12.879362    1632 main.go:141] libmachine: Attempt 5
	I0823 11:12:12.879382    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881424    1632 main.go:141] libmachine: Attempt 6
	I0823 11:12:14.881454    1632 main.go:141] libmachine: Searching for da:3b:71:83:24:7a in /var/db/dhcpd_leases ...
	I0823 11:12:14.881611    1632 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0823 11:12:14.881699    1632 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e79d7d}
	I0823 11:12:14.881706    1632 main.go:141] libmachine: Found match: da:3b:71:83:24:7a
	I0823 11:12:14.881720    1632 main.go:141] libmachine: IP: 192.168.105.2
	I0823 11:12:14.881729    1632 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0823 11:12:15.887997    1632 machine.go:88] provisioning docker machine ...
	I0823 11:12:15.888014    1632 buildroot.go:166] provisioning hostname "addons-603000"
	I0823 11:12:15.888459    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.888726    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.888732    1632 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-603000 && echo "addons-603000" | sudo tee /etc/hostname
	I0823 11:12:15.942083    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-603000
	
	I0823 11:12:15.942154    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:15.942421    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:15.942431    1632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-603000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-603000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-603000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:12:15.994730    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 11:12:15.994741    1632 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:12:15.994754    1632 buildroot.go:174] setting up certificates
	I0823 11:12:15.994761    1632 provision.go:83] configureAuth start
	I0823 11:12:15.994765    1632 provision.go:138] copyHostCerts
	I0823 11:12:15.994890    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:12:15.995099    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
	I0823 11:12:15.995204    1632 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:12:15.995273    1632 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.addons-603000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-603000]
	I0823 11:12:16.087853    1632 provision.go:172] copyRemoteCerts
	I0823 11:12:16.087919    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:12:16.087927    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.117611    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:12:16.124255    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0823 11:12:16.130804    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0823 11:12:16.138250    1632 provision.go:86] duration metric: configureAuth took 143.474625ms
	I0823 11:12:16.138257    1632 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:12:16.138365    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:16.138406    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.138621    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.138626    1632 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:12:16.188528    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:12:16.188533    1632 buildroot.go:70] root file system type: tmpfs
	I0823 11:12:16.188589    1632 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:12:16.188629    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.188853    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.188886    1632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:12:16.244639    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0823 11:12:16.244690    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.244933    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.244942    1632 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:12:16.607992    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0823 11:12:16.608005    1632 machine.go:91] provisioned docker machine in 720.015167ms
	I0823 11:12:16.608010    1632 client.go:171] LocalClient.Create took 14.696413292s
	I0823 11:12:16.608025    1632 start.go:167] duration metric: libmachine.API.Create for "addons-603000" took 14.696491208s
	I0823 11:12:16.608031    1632 start.go:300] post-start starting for "addons-603000" (driver="qemu2")
	I0823 11:12:16.608036    1632 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:12:16.608110    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:12:16.608120    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.635532    1632 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:12:16.636917    1632 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:12:16.636928    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:12:16.636998    1632 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:12:16.637031    1632 start.go:303] post-start completed in 28.997291ms
	I0823 11:12:16.637396    1632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/config.json ...
	I0823 11:12:16.637545    1632 start.go:128] duration metric: createHost completed in 15.077498208s
	I0823 11:12:16.637587    1632 main.go:141] libmachine: Using SSH client type: native
	I0823 11:12:16.637809    1632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100a8e1e0] 0x100a90c40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0823 11:12:16.637813    1632 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0823 11:12:16.687915    1632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692814336.418860543
	
	I0823 11:12:16.687923    1632 fix.go:206] guest clock: 1692814336.418860543
	I0823 11:12:16.687928    1632 fix.go:219] Guest: 2023-08-23 11:12:16.418860543 -0700 PDT Remote: 2023-08-23 11:12:16.637549 -0700 PDT m=+15.183153626 (delta=-218.688457ms)
	I0823 11:12:16.687939    1632 fix.go:190] guest clock delta is within tolerance: -218.688457ms
	I0823 11:12:16.687945    1632 start.go:83] releasing machines lock for "addons-603000", held for 15.127950375s
	I0823 11:12:16.688248    1632 ssh_runner.go:195] Run: cat /version.json
	I0823 11:12:16.688258    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.688283    1632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:12:16.688315    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:16.830956    1632 ssh_runner.go:195] Run: systemctl --version
	I0823 11:12:16.833845    1632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:12:16.836334    1632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:12:16.836377    1632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 11:12:16.842975    1632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 11:12:16.842984    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.843114    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:16.850298    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0823 11:12:16.854361    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:12:16.858207    1632 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:12:16.858231    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:12:16.861904    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.865421    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:12:16.868360    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:12:16.871352    1632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:12:16.874459    1632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:12:16.878023    1632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:12:16.881145    1632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:12:16.883828    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:16.960568    1632 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 11:12:16.969157    1632 start.go:466] detecting cgroup driver to use...
	I0823 11:12:16.969235    1632 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:12:16.974606    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.980054    1632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:12:16.985654    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:12:16.990412    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:16.995694    1632 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 11:12:17.032758    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:12:17.037631    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:12:17.043070    1632 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:12:17.044366    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:12:17.046802    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:12:17.051842    1632 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:12:17.122781    1632 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:12:17.203234    1632 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:12:17.203250    1632 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0823 11:12:17.208465    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:17.289831    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:18.445051    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15523225s)
	I0823 11:12:18.445131    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.516370    1632 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0823 11:12:18.597904    1632 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:12:18.678212    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.761579    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0823 11:12:18.769154    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:18.854594    1632 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0823 11:12:18.877760    1632 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0823 11:12:18.877836    1632 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0823 11:12:18.879918    1632 start.go:534] Will wait 60s for crictl version
	I0823 11:12:18.879971    1632 ssh_runner.go:195] Run: which crictl
	I0823 11:12:18.881335    1632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0823 11:12:18.895610    1632 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0823 11:12:18.895681    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.905007    1632 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:12:18.918762    1632 out.go:204] * Preparing Kubernetes v1.28.0 on Docker 24.0.4 ...
	I0823 11:12:18.918902    1632 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:12:18.920430    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
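
	The one-liner above is an idempotent hosts-file rewrite: grep -v strips any stale host.minikube.internal record, the fresh record is appended, and the result is copied back over /etc/hosts through a temp file. The same idiom works for any managed entry (names below are hypothetical):

	    # keep every line except the managed one, then re-append it
	    { grep -v $'\tmyhost.internal$' /etc/hosts; printf '10.0.0.5\tmyhost.internal\n'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
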
	I0823 11:12:18.924018    1632 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:12:18.924057    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:18.929191    1632 docker.go:636] Got preloaded images: 
	I0823 11:12:18.929199    1632 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.0 wasn't preloaded
	I0823 11:12:18.929238    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:18.932021    1632 ssh_runner.go:195] Run: which lz4
	I0823 11:12:18.933304    1632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 11:12:18.934619    1632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0823 11:12:18.934635    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356920680 bytes)
	I0823 11:12:20.248127    1632 docker.go:600] Took 1.314887 seconds to copy over tarball
	I0823 11:12:20.248187    1632 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 11:12:21.298381    1632 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.050206542s)
	I0823 11:12:21.298396    1632 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0823 11:12:21.314358    1632 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:12:21.317830    1632 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0823 11:12:21.322906    1632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:12:21.401236    1632 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:12:23.532716    1632 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.131514959s)
	I0823 11:12:23.532818    1632 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:12:23.539039    1632 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0823 11:12:23.539047    1632 cache_images.go:84] Images are preloaded, skipping loading
	I0823 11:12:23.539094    1632 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:12:23.547073    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:23.547082    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:23.547113    1632 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:12:23.547124    1632 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-603000 NodeName:addons-603000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0823 11:12:23.547195    1632 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-603000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
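
	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are exactly what kubeadm init consumes below. To sanity-check a config like this by hand, recent kubeadm releases (v1.26+) ship a validator; the path is the one minikube stages on the node:

	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
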
	
	I0823 11:12:23.547230    1632 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-603000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
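
	The empty ExecStart= in the kubelet drop-in above is deliberate systemd usage: for a non-oneshot service, drop-ins append to ExecStart, so the inherited value must be cleared before the replacement command is set. The general pattern (paths hypothetical):

	    # /etc/systemd/system/example.service.d/override.conf
	    [Service]
	    ExecStart=
	    ExecStart=/usr/local/bin/example --with-new-flags
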
	I0823 11:12:23.547289    1632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0823 11:12:23.550418    1632 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:12:23.550450    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:12:23.553530    1632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0823 11:12:23.558535    1632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 11:12:23.563462    1632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0823 11:12:23.568191    1632 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0823 11:12:23.569452    1632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:12:23.573456    1632 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000 for IP: 192.168.105.2
	I0823 11:12:23.573466    1632 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.573659    1632 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:12:23.735121    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt ...
	I0823 11:12:23.735127    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt: {Name:mk63b4dc9bf64742b3a2443ec5bb13fecbad997f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735357    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key ...
	I0823 11:12:23.735361    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key: {Name:mk69cc2166a61f2546ed0135cfb5fd8f801bd14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.735483    1632 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:12:23.796242    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt ...
	I0823 11:12:23.796246    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt: {Name:mk419a1ff00ed72437ca83474e3a88fdfaeeeee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796377    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key ...
	I0823 11:12:23.796380    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key: {Name:mk4c7f9a48478507429af69af483ca06fbc978e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:23.796514    1632 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key
	I0823 11:12:23.796523    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt with IP's: []
	I0823 11:12:24.000999    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt ...
	I0823 11:12:24.001008    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: {Name:mkf571a66a11fe461d7b44eaf6283e5341b1a7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001274    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key ...
	I0823 11:12:24.001277    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.key: {Name:mkcf4093aeeeecec47bfe93d95e186f81025a994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.001388    1632 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969
	I0823 11:12:24.001398    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0823 11:12:24.112607    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 ...
	I0823 11:12:24.112611    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969: {Name:mkc8690ae4b550cd6392c142a6c0c10c4e6a5520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112766    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 ...
	I0823 11:12:24.112769    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969: {Name:mkce56fce274146ffaf24072ae4ff4b5e0acf555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.112886    1632 certs.go:337] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt
	I0823 11:12:24.113091    1632 certs.go:341] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key
	I0823 11:12:24.113203    1632 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key
	I0823 11:12:24.113221    1632 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt with IP's: []
	I0823 11:12:24.164570    1632 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt ...
	I0823 11:12:24.164574    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt: {Name:mk78bcc5a9a59a81499b830c0df5af65d761e828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.164724    1632 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key ...
	I0823 11:12:24.164727    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key: {Name:mk7a6ea56c8d07401b6d3adda9d302f5989259f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:24.165026    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:12:24.165052    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:12:24.165076    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:12:24.165112    1632 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:12:24.165474    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:12:24.173558    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:12:24.180163    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:12:24.186570    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 11:12:24.193721    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:12:24.200859    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:12:24.207727    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:12:24.214435    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:12:24.221806    1632 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:12:24.228902    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 11:12:24.234839    1632 ssh_runner.go:195] Run: openssl version
	I0823 11:12:24.236947    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:12:24.239793    1632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241347    1632 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.241366    1632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:12:24.243416    1632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
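
	The two openssl/ln steps above implement OpenSSL's hashed-directory CA lookup: x509 -hash prints the subject-name hash of the certificate (b5213941 for the minikube CA, per the symlink created here), and a <hash>.0 symlink in /etc/ssl/certs lets verification find the CA by that hash. Standalone:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
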
	I0823 11:12:24.246590    1632 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:12:24.247919    1632 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0823 11:12:24.247965    1632 kubeadm.go:404] StartCluster: {Name:addons-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:addons-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:12:24.248029    1632 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:12:24.253769    1632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:12:24.257096    1632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:12:24.259889    1632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:12:24.262662    1632 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 11:12:24.262675    1632 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 11:12:24.285841    1632 kubeadm.go:322] [init] Using Kubernetes version: v1.28.0
	I0823 11:12:24.285870    1632 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 11:12:24.340213    1632 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 11:12:24.340268    1632 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 11:12:24.340328    1632 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0823 11:12:24.397608    1632 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 11:12:24.406812    1632 out.go:204]   - Generating certificates and keys ...
	I0823 11:12:24.406874    1632 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 11:12:24.406907    1632 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 11:12:24.448173    1632 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0823 11:12:24.501859    1632 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0823 11:12:24.630928    1632 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0823 11:12:24.705637    1632 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0823 11:12:24.853624    1632 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0823 11:12:24.853703    1632 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:24.984031    1632 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0823 11:12:24.984098    1632 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-603000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0823 11:12:25.055121    1632 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0823 11:12:25.098919    1632 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0823 11:12:25.161126    1632 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0823 11:12:25.161156    1632 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 11:12:25.251684    1632 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 11:12:25.312453    1632 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 11:12:25.380616    1632 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 11:12:25.465178    1632 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 11:12:25.465413    1632 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 11:12:25.466498    1632 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 11:12:25.474829    1632 out.go:204]   - Booting up control plane ...
	I0823 11:12:25.474890    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 11:12:25.474941    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 11:12:25.474973    1632 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 11:12:25.475045    1632 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 11:12:25.475095    1632 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 11:12:25.475114    1632 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 11:12:25.557775    1632 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 11:12:29.059855    1632 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.502356 seconds
	I0823 11:12:29.059917    1632 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0823 11:12:29.066243    1632 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0823 11:12:29.575724    1632 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0823 11:12:29.575818    1632 kubeadm.go:322] [mark-control-plane] Marking the node addons-603000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0823 11:12:30.082813    1632 kubeadm.go:322] [bootstrap-token] Using token: l2h0hz.d0iyqenbl2749eth
	I0823 11:12:30.096111    1632 out.go:204]   - Configuring RBAC rules ...
	I0823 11:12:30.096178    1632 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0823 11:12:30.096240    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0823 11:12:30.098015    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0823 11:12:30.099101    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0823 11:12:30.100289    1632 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0823 11:12:30.101319    1632 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0823 11:12:30.105824    1632 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0823 11:12:30.274483    1632 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0823 11:12:30.492880    1632 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0823 11:12:30.493261    1632 kubeadm.go:322] 
	I0823 11:12:30.493290    1632 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0823 11:12:30.493295    1632 kubeadm.go:322] 
	I0823 11:12:30.493350    1632 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0823 11:12:30.493355    1632 kubeadm.go:322] 
	I0823 11:12:30.493371    1632 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0823 11:12:30.493409    1632 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0823 11:12:30.493436    1632 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0823 11:12:30.493440    1632 kubeadm.go:322] 
	I0823 11:12:30.493470    1632 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0823 11:12:30.493473    1632 kubeadm.go:322] 
	I0823 11:12:30.493507    1632 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0823 11:12:30.493512    1632 kubeadm.go:322] 
	I0823 11:12:30.493534    1632 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0823 11:12:30.493571    1632 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0823 11:12:30.493608    1632 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0823 11:12:30.493612    1632 kubeadm.go:322] 
	I0823 11:12:30.493652    1632 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0823 11:12:30.493686    1632 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0823 11:12:30.493689    1632 kubeadm.go:322] 
	I0823 11:12:30.493755    1632 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493808    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f \
	I0823 11:12:30.493820    1632 kubeadm.go:322] 	--control-plane 
	I0823 11:12:30.493824    1632 kubeadm.go:322] 
	I0823 11:12:30.493863    1632 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0823 11:12:30.493867    1632 kubeadm.go:322] 
	I0823 11:12:30.493911    1632 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l2h0hz.d0iyqenbl2749eth \
	I0823 11:12:30.493962    1632 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f 
	I0823 11:12:30.494020    1632 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 11:12:30.494027    1632 cni.go:84] Creating CNI manager for ""
	I0823 11:12:30.494035    1632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:12:30.502806    1632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0823 11:12:30.507845    1632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0823 11:12:30.511861    1632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
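
	The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not echoed in the log. A plausible minimal bridge conflist matching the pod CIDR chosen earlier (10.244.0.0/16); minikube's actual field set may differ:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
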
	I0823 11:12:30.516952    1632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:12:30.517017    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.517023    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128 minikube.k8s.io/name=addons-603000 minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.588330    1632 ops.go:34] apiserver oom_adj: -16
	I0823 11:12:30.588368    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:30.620791    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.158369    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:31.658375    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.158344    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:32.658320    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.158345    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:33.658343    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.158277    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:34.658250    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.158228    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:35.658245    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.158202    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:36.658170    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.156385    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:37.656540    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.158165    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:38.658197    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.158131    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:39.658110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.158144    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:40.658128    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.158110    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:41.658102    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.158090    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:42.658123    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.158020    1632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:12:43.214120    1632 kubeadm.go:1081] duration metric: took 12.6974425s to wait for elevateKubeSystemPrivileges.
	I0823 11:12:43.214131    1632 kubeadm.go:406] StartCluster complete in 18.96661775s
	I0823 11:12:43.214140    1632 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214301    1632 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:12:43.214543    1632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:12:43.214750    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:12:43.214793    1632 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0823 11:12:43.214848    1632 addons.go:69] Setting volumesnapshots=true in profile "addons-603000"
	I0823 11:12:43.214857    1632 addons.go:231] Setting addon volumesnapshots=true in "addons-603000"
	I0823 11:12:43.214861    1632 addons.go:69] Setting ingress=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214901    1632 addons.go:69] Setting default-storageclass=true in profile "addons-603000"
	I0823 11:12:43.214897    1632 addons.go:231] Setting addon ingress=true in "addons-603000"
	I0823 11:12:43.214917    1632 addons.go:69] Setting cloud-spanner=true in profile "addons-603000"
	I0823 11:12:43.214937    1632 addons.go:69] Setting gcp-auth=true in profile "addons-603000"
	I0823 11:12:43.214943    1632 mustload.go:65] Loading cluster: addons-603000
	I0823 11:12:43.214929    1632 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-603000"
	I0823 11:12:43.214951    1632 addons.go:231] Setting addon cloud-spanner=true in "addons-603000"
	I0823 11:12:43.215174    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215232    1632 addons.go:69] Setting metrics-server=true in profile "addons-603000"
	I0823 11:12:43.215240    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215246    1632 addons.go:231] Setting addon metrics-server=true in "addons-603000"
	I0823 11:12:43.215279    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215384    1632 addons.go:69] Setting registry=true in profile "addons-603000"
	I0823 11:12:43.215406    1632 addons.go:231] Setting addon registry=true in "addons-603000"
	I0823 11:12:43.215406    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:12:43.215469    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215573    1632 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.215613    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215623    1632 addons.go:69] Setting ingress-dns=true in profile "addons-603000"
	I0823 11:12:43.215632    1632 addons.go:69] Setting storage-provisioner=true in profile "addons-603000"
	I0823 11:12:43.215635    1632 addons.go:231] Setting addon ingress-dns=true in "addons-603000"
	I0823 11:12:43.215647    1632 addons.go:231] Setting addon storage-provisioner=true in "addons-603000"
	I0823 11:12:43.215715    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.215748    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.214927    1632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-603000"
	I0823 11:12:43.215926    1632 addons.go:69] Setting inspektor-gadget=true in profile "addons-603000"
	I0823 11:12:43.215930    1632 addons.go:231] Setting addon inspektor-gadget=true in "addons-603000"
	I0823 11:12:43.215950    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.216258    1632 config.go:182] Loaded profile config "addons-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	W0823 11:12:43.216285    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216296    1632 addons.go:277] "addons-603000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0823 11:12:43.216301    1632 addons.go:467] Verifying addon ingress=true in "addons-603000"
	I0823 11:12:43.220697    1632 out.go:177] * Verifying ingress addon...
	W0823 11:12:43.216646    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216836    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.216925    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217082    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217134    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217426    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	W0823 11:12:43.217553    1632 host.go:54] host status for "addons-603000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/monitor: connect: connection refused
	I0823 11:12:43.218066    1632 host.go:66] Checking if "addons-603000" exists ...
	W0823 11:12:43.229685    1632 addons.go:277] "addons-603000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229693    1632 addons.go:277] "addons-603000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229707    1632 addons_storage_classes.go:55] "addons-603000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0823 11:12:43.229696    1632 addons.go:277] "addons-603000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229727    1632 addons.go:277] "addons-603000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0823 11:12:43.229749    1632 addons.go:277] "addons-603000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0823 11:12:43.230154    1632 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0823 11:12:43.235671    1632 addons.go:277] "addons-603000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0823 11:12:43.235676    1632 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0823 11:12:43.237659    1632 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-603000" context rescaled to 1 replicas
	I0823 11:12:43.239627    1632 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:12:43.245678    1632 out.go:177] * Verifying Kubernetes components...
	I0823 11:12:43.239670    1632 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0823 11:12:43.239676    1632 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-603000"
	I0823 11:12:43.239682    1632 addons.go:231] Setting addon default-storageclass=true in "addons-603000"
	I0823 11:12:43.239679    1632 addons.go:467] Verifying addon registry=true in "addons-603000"
	I0823 11:12:43.239684    1632 addons.go:467] Verifying addon metrics-server=true in "addons-603000"
	I0823 11:12:43.241522    1632 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0823 11:12:43.257706    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:43.257792    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:12:43.259109    1632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
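
	The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile just ahead of the forward directive (and a log directive ahead of errors), so host.minikube.internal resolves from inside the cluster. The inserted fragment, as given in the command:

	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }
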
	I0823 11:12:43.263610    1632 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0823 11:12:43.264359    1632 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.269028    1632 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.275582    1632 out.go:177] * Verifying registry addon...
	I0823 11:12:43.281668    1632 out.go:177] * Verifying csi-hostpath-driver addon...
	I0823 11:12:43.281674    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:12:43.281680    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0823 11:12:43.281692    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0823 11:12:43.288063    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0823 11:12:43.293691    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293700    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.293699    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:43.294283    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0823 11:12:43.297149    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0823 11:12:43.298922    1632 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0823 11:12:43.392760    1632 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0823 11:12:43.392773    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0823 11:12:43.396919    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0823 11:12:43.403660    1632 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0823 11:12:43.403671    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0823 11:12:43.426051    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:12:43.442459    1632 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0823 11:12:43.442469    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0823 11:12:43.465118    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0823 11:12:43.465127    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0823 11:12:43.485410    1632 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0823 11:12:43.485421    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0823 11:12:43.505701    1632 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0823 11:12:43.505712    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0823 11:12:43.510666    1632 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.510672    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0823 11:12:43.522915    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0823 11:12:43.857341    1632 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0823 11:12:43.857696    1632 node_ready.go:35] waiting up to 6m0s for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859423    1632 node_ready.go:49] node "addons-603000" has status "Ready":"True"
	I0823 11:12:43.859431    1632 node_ready.go:38] duration metric: took 1.726334ms waiting for node "addons-603000" to be "Ready" ...
	I0823 11:12:43.859435    1632 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:12:43.862557    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
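
	The probes that follow poll the coredns pod's Ready condition until it flips to True, within the 6m budget noted above. Done by hand, the equivalent wait is:

	    kubectl -n kube-system wait --for=condition=Ready pod/coredns-5dd5756b68-blj5d --timeout=6m
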
	I0823 11:12:45.872773    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:47.873129    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:49.841300    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0823 11:12:49.841316    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.876535    1632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0823 11:12:49.882169    1632 addons.go:231] Setting addon gcp-auth=true in "addons-603000"
	I0823 11:12:49.882191    1632 host.go:66] Checking if "addons-603000" exists ...
	I0823 11:12:49.882982    1632 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0823 11:12:49.882989    1632 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/addons-603000/id_rsa Username:docker}
	I0823 11:12:49.913095    1632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0823 11:12:49.916035    1632 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0823 11:12:49.920011    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0823 11:12:49.920017    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0823 11:12:49.927312    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0823 11:12:49.927320    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0823 11:12:49.934731    1632 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:49.934739    1632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0823 11:12:49.943537    1632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0823 11:12:50.372936    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:50.550032    1632 addons.go:467] Verifying addon gcp-auth=true in "addons-603000"
	I0823 11:12:50.554005    1632 out.go:177] * Verifying gcp-auth addon...
	I0823 11:12:50.561304    1632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0823 11:12:50.564772    1632 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0823 11:12:50.564779    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:50.566533    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.070598    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:51.570353    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.070168    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.570325    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:52.873631    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:53.069304    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:53.570391    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.068627    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:54.570250    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.068767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:55.371099    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:55.570140    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.070737    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:56.570638    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.070296    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:57.374140    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:12:57.570286    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.070358    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:58.569814    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.070104    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.570841    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:12:59.872601    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:00.070509    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:00.570371    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.070409    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:01.569429    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.069671    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:02.372554    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:02.571673    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.069684    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:03.570027    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.070029    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.569893    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:04.872229    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:05.069764    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:05.569750    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.070699    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.569829    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:06.872245    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:07.070111    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:07.569767    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.069905    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:08.571644    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.069600    1632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0823 11:13:09.372178    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:09.569699    1632 kapi.go:107] duration metric: took 19.008846917s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0823 11:13:09.574491    1632 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-603000 cluster.
	I0823 11:13:09.578500    1632 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0823 11:13:09.582495    1632 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
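	(The `gcp-auth-skip-secret` opt-out mentioned above is a plain pod label. A minimal sketch of its use, applied the same way the addon manifests earlier in this log are; the pod name and image here are illustrative, not taken from this run:

	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo        # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"  # gcp-auth webhook skips mounting credentials into this pod
	    spec:
	      containers:
	      - name: app
	        image: busybox                # illustrative image
	        command: ["sleep", "3600"]
	    EOF
	)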
	I0823 11:13:11.372160    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:13.871903    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:15.872253    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:18.371986    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:20.372421    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:22.871118    1632 pod_ready.go:102] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"False"
	I0823 11:13:23.871704    1632 pod_ready.go:92] pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.871715    1632 pod_ready.go:81] duration metric: took 40.010097208s waiting for pod "coredns-5dd5756b68-blj5d" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.871719    1632 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.872661    1632 pod_ready.go:97] error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872669    1632 pod_ready.go:81] duration metric: took 947.666µs waiting for pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace to be "Ready" ...
	E0823 11:13:23.872673    1632 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-mfwg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-mfwg5" not found
	I0823 11:13:23.872689    1632 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875032    1632 pod_ready.go:92] pod "etcd-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.875037    1632 pod_ready.go:81] duration metric: took 2.343625ms waiting for pod "etcd-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.875040    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877331    1632 pod_ready.go:92] pod "kube-apiserver-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.877336    1632 pod_ready.go:81] duration metric: took 2.293291ms waiting for pod "kube-apiserver-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.877340    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879623    1632 pod_ready.go:92] pod "kube-controller-manager-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:23.879629    1632 pod_ready.go:81] duration metric: took 2.287416ms waiting for pod "kube-controller-manager-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:23.879633    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072856    1632 pod_ready.go:92] pod "kube-proxy-wxvd5" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.072864    1632 pod_ready.go:81] duration metric: took 193.233541ms waiting for pod "kube-proxy-wxvd5" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.072869    1632 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.472997    1632 pod_ready.go:92] pod "kube-scheduler-addons-603000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:13:24.473008    1632 pod_ready.go:81] duration metric: took 400.145667ms waiting for pod "kube-scheduler-addons-603000" in "kube-system" namespace to be "Ready" ...
	I0823 11:13:24.473012    1632 pod_ready.go:38] duration metric: took 40.614530833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:13:24.473025    1632 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:13:24.473121    1632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:13:24.478605    1632 api_server.go:72] duration metric: took 41.239942375s to wait for apiserver process to appear ...
	I0823 11:13:24.478610    1632 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:13:24.478617    1632 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0823 11:13:24.482269    1632 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0823 11:13:24.482906    1632 api_server.go:141] control plane version: v1.28.0
	I0823 11:13:24.482912    1632 api_server.go:131] duration metric: took 4.29975ms to wait for apiserver health ...
	I0823 11:13:24.482914    1632 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:13:24.674508    1632 system_pods.go:59] 7 kube-system pods found
	I0823 11:13:24.674519    1632 system_pods.go:61] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:24.674521    1632 system_pods.go:61] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:24.674524    1632 system_pods.go:61] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:24.674526    1632 system_pods.go:61] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:24.674531    1632 system_pods.go:61] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:24.674534    1632 system_pods.go:61] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:24.674539    1632 system_pods.go:61] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:24.674542    1632 system_pods.go:74] duration metric: took 191.630375ms to wait for pod list to return data ...
	I0823 11:13:24.674546    1632 default_sa.go:34] waiting for default service account to be created ...
	I0823 11:13:24.872182    1632 default_sa.go:45] found service account: "default"
	I0823 11:13:24.872197    1632 default_sa.go:55] duration metric: took 197.644583ms for default service account to be created ...
	I0823 11:13:24.872203    1632 system_pods.go:116] waiting for k8s-apps to be running ...
	I0823 11:13:25.073612    1632 system_pods.go:86] 7 kube-system pods found
	I0823 11:13:25.073624    1632 system_pods.go:89] "coredns-5dd5756b68-blj5d" [1fb66ddb-fafe-4785-b040-d0fd2587b3ae] Running
	I0823 11:13:25.073627    1632 system_pods.go:89] "etcd-addons-603000" [82b2b170-9259-4355-b5e0-2cf094533743] Running
	I0823 11:13:25.073629    1632 system_pods.go:89] "kube-apiserver-addons-603000" [1f3f8b80-c0bb-4e12-80fa-d87f557ae07a] Running
	I0823 11:13:25.073632    1632 system_pods.go:89] "kube-controller-manager-addons-603000" [726ada4c-a3da-4bac-99f2-83c60531a97e] Running
	I0823 11:13:25.073636    1632 system_pods.go:89] "kube-ingress-dns-minikube" [d0fb563e-9f6f-4cbe-b232-2f4956e99b67] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0823 11:13:25.073641    1632 system_pods.go:89] "kube-proxy-wxvd5" [7632c8d9-6979-41bb-a5a1-88f356962d6b] Running
	I0823 11:13:25.073644    1632 system_pods.go:89] "kube-scheduler-addons-603000" [606497e8-7ccf-4730-91d4-a1b0bb5f1c4f] Running
	I0823 11:13:25.073647    1632 system_pods.go:126] duration metric: took 201.446458ms to wait for k8s-apps to be running ...
	I0823 11:13:25.073650    1632 system_svc.go:44] waiting for kubelet service to be running ....
	I0823 11:13:25.073715    1632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:13:25.078691    1632 system_svc.go:56] duration metric: took 5.037209ms WaitForService to wait for kubelet.
	I0823 11:13:25.078704    1632 kubeadm.go:581] duration metric: took 41.84005475s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0823 11:13:25.078718    1632 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:13:25.271033    1632 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:13:25.271069    1632 node_conditions.go:123] node cpu capacity is 2
	I0823 11:13:25.271075    1632 node_conditions.go:105] duration metric: took 192.359208ms to run NodePressure ...
	I0823 11:13:25.271080    1632 start.go:228] waiting for startup goroutines ...
	I0823 11:18:43.244336    1632 kapi.go:107] duration metric: took 6m0.011535667s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0823 11:18:43.244405    1632 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0823 11:18:43.298298    1632 kapi.go:107] duration metric: took 6m0.001378042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0823 11:18:43.298326    1632 kapi.go:107] duration metric: took 6m0.00762275s to wait for kubernetes.io/minikube-addons=registry ...
	W0823 11:18:43.298353    1632 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	W0823 11:18:43.298328    1632 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0823 11:18:43.304474    1632 out.go:177] * Enabled addons: volumesnapshots, cloud-spanner, storage-provisioner, metrics-server, ingress-dns, default-storageclass, inspektor-gadget, gcp-auth
	I0823 11:18:43.311559    1632 addons.go:502] enable addons completed in 6m0.094130167s: enabled=[volumesnapshots cloud-spanner storage-provisioner metrics-server ingress-dns default-storageclass inspektor-gadget gcp-auth]
	I0823 11:18:43.311576    1632 start.go:233] waiting for cluster config update ...
	I0823 11:18:43.311585    1632 start.go:242] writing updated cluster config ...
	I0823 11:18:43.312012    1632 ssh_runner.go:195] Run: rm -f paused
	I0823 11:18:43.342628    1632 start.go:600] kubectl: 1.27.2, cluster: 1.28.0 (minor skew: 1)
	I0823 11:18:43.345571    1632 out.go:177] * Done! kubectl is now configured to use "addons-603000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:32:23 UTC. --
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.114537177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.114541260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:23:34 addons-603000 dockerd[1097]: time="2023-08-23T18:23:34.187824843Z" level=info msg="ignoring event" container=33b6b60ed9e557546bf1703af8eea1e09157e1e00441ca9a3dc016e3c9cd76e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.187820301Z" level=info msg="shim disconnected" id=33b6b60ed9e557546bf1703af8eea1e09157e1e00441ca9a3dc016e3c9cd76e5 namespace=moby
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.188065590Z" level=warning msg="cleaning up after shim disconnected" id=33b6b60ed9e557546bf1703af8eea1e09157e1e00441ca9a3dc016e3c9cd76e5 namespace=moby
	Aug 23 18:23:34 addons-603000 dockerd[1103]: time="2023-08-23T18:23:34.188086340Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094207801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094239634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094428221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.094440262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:28:41 addons-603000 dockerd[1097]: time="2023-08-23T18:28:41.143468399Z" level=info msg="ignoring event" container=0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.143514441Z" level=info msg="shim disconnected" id=0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98 namespace=moby
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.143542817Z" level=warning msg="cleaning up after shim disconnected" id=0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98 namespace=moby
	Aug 23 18:28:41 addons-603000 dockerd[1103]: time="2023-08-23T18:28:41.143546942Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:30:44 addons-603000 dockerd[1103]: time="2023-08-23T18:30:44.885219159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:30:44 addons-603000 dockerd[1103]: time="2023-08-23T18:30:44.885249158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:30:44 addons-603000 dockerd[1103]: time="2023-08-23T18:30:44.885257574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:30:44 addons-603000 dockerd[1103]: time="2023-08-23T18:30:44.885263907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:30:44 addons-603000 cri-dockerd[995]: time="2023-08-23T18:30:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1ce470d3ac5dbd8639ca47c5556985a38ebc27169c1b2e8fafd5221adaafb8f7/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 23 18:30:45 addons-603000 dockerd[1097]: time="2023-08-23T18:30:45.233004021Z" level=warning msg="reference for unknown type: " digest="sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98" remote="ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Aug 23 18:30:49 addons-603000 cri-dockerd[995]: time="2023-08-23T18:30:49Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.19.0@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98"
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216878665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216925454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216931996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:30:49 addons-603000 dockerd[1103]: time="2023-08-23T18:30:49.216936204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID
	c3aac1112826c       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98               About a minute ago   Running             headlamp                  0                   1ce470d3ac5db
	0f90525fcf0bb       1499ed4fbd0aa                                                                                                       3 minutes ago        Exited              minikube-ingress-dns      8                   ee55b672623b1
	e852eb8754f9d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf        19 minutes ago       Running             gcp-auth                  0                   b3f97bd7715ac
	db988e4e18bfe       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:900ee9d94e12c15caebf0f198a51b532e6e2e373f6adfdef6fc78d11f6ecc6ac   19 minutes ago       Running             gadget                    0                   12623df028552
	20ee50fe57df9       940f54a5bcae9                                                                                                       19 minutes ago       Running             kube-proxy                0                   046b3dd276694
	eb22253f397d0       97e04611ad434                                                                                                       19 minutes ago       Running             coredns                   0                   83362738c741a
	695ccc939e29c       762dce4090c5f                                                                                                       19 minutes ago       Running             kube-scheduler            0                   cd7498705ccfc
	0d85c39c58b71       9cdd6470f48c8                                                                                                       19 minutes ago       Running             etcd                      0                   52014f9a89d58
	9700e59743269       46cc66ccc7c19                                                                                                       19 minutes ago       Running             kube-controller-manager   0                   0e19c8d65b488
	594c65a938703       00543d2fe5d71                                                                                                       19 minutes ago       Running             kube-apiserver            0                   e5952ff9dbb6d
	
	* 
	* ==> coredns [eb22253f397d] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48978 - 47852 "HINFO IN 1860146809240780526.8201528298968188454. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004478075s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-603000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-603000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=addons-603000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_12_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:12:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-603000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:32:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:31:02 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:31:02 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:31:02 +0000   Wed, 23 Aug 2023 18:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 23 Aug 2023 18:31:02 +0000   Wed, 23 Aug 2023 18:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-603000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 d563976ae62b43f8a1fb53a177d29cdd
	  System UUID:                d563976ae62b43f8a1fb53a177d29cdd
	  Boot ID:                    95f14d55-f8c9-49e4-bb2e-505c410326e7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-ndv9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  gcp-auth                    gcp-auth-d4c87556c-k5hw7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  headlamp                    headlamp-699c48fb74-fhmj2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-5dd5756b68-blj5d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-603000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-603000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-603000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-wxvd5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-603000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node addons-603000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node addons-603000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node addons-603000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m   kubelet          Node addons-603000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-603000 event: Registered Node addons-603000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.645801] EINJ: EINJ table not found.
	[  +0.508913] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043174] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000868] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.060149] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.082417] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.495013] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.161467] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.082809] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.085797] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.141175] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.084612] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.080569] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.082356] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.081344] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.093418] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.547960] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +2.110352] kauditd_printk_skb: 29 callbacks suppressed
	[  +2.040486] systemd-fstab-generator[1407]: Ignoring "noauto" for root device
	[  +4.639082] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[ +13.918576] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.422998] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.996433] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug23 18:13] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.103613] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [0d85c39c58b7] <==
	* {"level":"info","ts":"2023-08-23T18:12:26.543189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.543213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.54323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.543269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-08-23T18:12:26.550684Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.550945Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-603000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-23T18:12:26.551005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:12:26.551568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-08-23T18:12:26.555059Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.555101Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.555137Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:12:26.551069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:12:26.555541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-23T18:12:26.563277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:12:26.563307Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-08-23T18:12:48.393724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.007861ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-blj5d\" ","response":"range_response_count:1 size:4755"}
	{"level":"info","ts":"2023-08-23T18:12:48.39376Z","caller":"traceutil/trace.go:171","msg":"trace[2046528636] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-blj5d; range_end:; response_count:1; response_revision:423; }","duration":"169.059532ms","start":"2023-08-23T18:12:48.224693Z","end":"2023-08-23T18:12:48.393753Z","steps":["trace[2046528636] 'range keys from in-memory index tree'  (duration: 168.95968ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:12:48.39392Z","caller":"traceutil/trace.go:171","msg":"trace[720409665] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"100.636151ms","start":"2023-08-23T18:12:48.29328Z","end":"2023-08-23T18:12:48.393917Z","steps":["trace[720409665] 'process raft request'  (duration: 77.612181ms)","trace[720409665] 'compare'  (duration: 22.773829ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-23T18:12:50.777156Z","caller":"traceutil/trace.go:171","msg":"trace[471484672] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"104.054592ms","start":"2023-08-23T18:12:50.673089Z","end":"2023-08-23T18:12:50.777143Z","steps":["trace[471484672] 'process raft request'  (duration: 103.971962ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-23T18:22:26.650207Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":655}
	{"level":"info","ts":"2023-08-23T18:22:26.650931Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":655,"took":"518.817µs","hash":1262660470}
	{"level":"info","ts":"2023-08-23T18:22:26.650949Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1262660470,"revision":655,"compact-revision":-1}
	{"level":"info","ts":"2023-08-23T18:27:26.652533Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":748}
	{"level":"info","ts":"2023-08-23T18:27:26.652937Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":748,"took":"267.17µs","hash":3279716501}
	{"level":"info","ts":"2023-08-23T18:27:26.652952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3279716501,"revision":748,"compact-revision":655}
	
	* 
	* ==> gcp-auth [e852eb8754f9] <==
	* 2023/08/23 18:13:08 GCP Auth Webhook started!
	2023/08/23 18:13:18 failed to get releases file: Get "https://storage.googleapis.com/minikube-gcp-auth/releases.json": dial tcp: lookup storage.googleapis.com on 10.96.0.10:53: read udp 10.244.0.5:60048->10.96.0.10:53: read: connection refused
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	2023/08/23 18:30:44 Ready to marshal response ...
	2023/08/23 18:30:44 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:32:23 up 20 min,  0 users,  load average: 0.29, 0.21, 0.12
	Linux addons-603000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [594c65a93870] <==
	* I0823 18:12:27.437034       1 cache.go:39] Caches are synced for autoregister controller
	I0823 18:12:27.443118       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0823 18:12:27.481999       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0823 18:12:28.325487       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0823 18:12:28.327337       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0823 18:12:28.327348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0823 18:12:28.483580       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0823 18:12:28.494448       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0823 18:12:28.528987       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0823 18:12:28.530764       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.2]
	I0823 18:12:28.531188       1 controller.go:624] quota admission added evaluator for: endpoints
	I0823 18:12:28.532349       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:12:29.394948       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0823 18:12:30.000403       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0823 18:12:30.005168       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0823 18:12:30.011907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0823 18:12:42.297794       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0823 18:12:42.999693       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0823 18:12:43.740564       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:12:50.388002       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.33.188"}
	I0823 18:12:50.406209       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0823 18:17:27.368575       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:22:27.369052       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:27:27.369080       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0823 18:30:44.504708       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.202.15"}
	
	* 
	* ==> kube-controller-manager [9700e5974326] <==
	* I0823 18:13:05.491052       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:05.492818       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:05.492904       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0823 18:13:09.490531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="5.007096ms"
	I0823 18:13:09.490673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="12.225µs"
	I0823 18:13:12.451688       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="traces.gadget.kinvolk.io"
	I0823 18:13:12.451711       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0823 18:13:12.552518       1 shared_informer.go:318] Caches are synced for resource quota
	I0823 18:13:12.777696       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0823 18:13:12.777713       1 shared_informer.go:318] Caches are synced for garbage collector
	I0823 18:13:23.581296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.891249ms"
	I0823 18:13:23.581368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.846µs"
	I0823 18:13:35.007690       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:35.008253       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:13:35.019213       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0823 18:13:35.019817       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0823 18:30:44.515595       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-699c48fb74 to 1"
	I0823 18:30:44.537411       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-699c48fb74-fhmj2"
	I0823 18:30:44.546980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="31.640595ms"
	I0823 18:30:44.550301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="3.298736ms"
	I0823 18:30:44.550329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="13.79µs"
	I0823 18:30:44.562652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="41.165µs"
	I0823 18:30:50.261433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="18.874µs"
	I0823 18:30:50.270891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="4.914237ms"
	I0823 18:30:50.271079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="66.247µs"
	
	* 
	* ==> kube-proxy [20ee50fe57df] <==
	* I0823 18:12:44.324386       1 server_others.go:69] "Using iptables proxy"
	I0823 18:12:44.336042       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0823 18:12:44.344520       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0823 18:12:44.344533       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0823 18:12:44.345190       1 server_others.go:152] "Using iptables Proxier"
	I0823 18:12:44.345242       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0823 18:12:44.345409       1 server.go:846] "Version info" version="v1.28.0"
	I0823 18:12:44.345419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:12:44.346019       1 config.go:188] "Starting service config controller"
	I0823 18:12:44.346074       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0823 18:12:44.346113       1 config.go:97] "Starting endpoint slice config controller"
	I0823 18:12:44.346120       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0823 18:12:44.346526       1 config.go:315] "Starting node config controller"
	I0823 18:12:44.346554       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0823 18:12:44.446872       1 shared_informer.go:318] Caches are synced for node config
	I0823 18:12:44.446887       1 shared_informer.go:318] Caches are synced for service config
	I0823 18:12:44.446897       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [695ccc939e29] <==
	* W0823 18:12:27.662067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0823 18:12:27.662073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0823 18:12:27.662103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0823 18:12:27.662264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0823 18:12:27.662344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0823 18:12:27.662399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0823 18:12:27.662439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0823 18:12:27.662446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0823 18:12:27.662483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:12:27.662491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0823 18:12:27.662520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0823 18:12:27.662530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0823 18:12:27.662556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0823 18:12:27.662568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0823 18:12:27.662623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0823 18:12:27.662630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0823 18:12:27.662674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:12:27.662681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0823 18:12:28.959674       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:12:13 UTC, ends at Wed 2023-08-23 18:32:23 UTC. --
	Aug 23 18:30:44 addons-603000 kubelet[2300]: I0823 18:30:44.546044    2300 memory_manager.go:346] "RemoveStaleState removing state" podUID="837a7b11-2324-4d35-8b45-60b54a0a437d" containerName="patch"
	Aug 23 18:30:44 addons-603000 kubelet[2300]: I0823 18:30:44.546048    2300 memory_manager.go:346] "RemoveStaleState removing state" podUID="4714c96f-dbff-488c-b433-b2b82aa46082" containerName="create"
	Aug 23 18:30:44 addons-603000 kubelet[2300]: I0823 18:30:44.646316    2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwpf\" (UniqueName: \"kubernetes.io/projected/c34b494a-df64-4fd0-9ff9-0b52399c8158-kube-api-access-5nwpf\") pod \"headlamp-699c48fb74-fhmj2\" (UID: \"c34b494a-df64-4fd0-9ff9-0b52399c8158\") " pod="headlamp/headlamp-699c48fb74-fhmj2"
	Aug 23 18:30:44 addons-603000 kubelet[2300]: I0823 18:30:44.646339    2300 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c34b494a-df64-4fd0-9ff9-0b52399c8158-gcp-creds\") pod \"headlamp-699c48fb74-fhmj2\" (UID: \"c34b494a-df64-4fd0-9ff9-0b52399c8158\") " pod="headlamp/headlamp-699c48fb74-fhmj2"
	Aug 23 18:30:50 addons-603000 kubelet[2300]: I0823 18:30:50.265681    2300 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="headlamp/headlamp-699c48fb74-fhmj2" podStartSLOduration=2.130440922 podCreationTimestamp="2023-08-23 18:30:44 +0000 UTC" firstStartedPulling="2023-08-23 18:30:45.012977308 +0000 UTC m=+1095.027237801" lastFinishedPulling="2023-08-23 18:30:49.148193191 +0000 UTC m=+1099.162453726" observedRunningTime="2023-08-23 18:30:50.260177974 +0000 UTC m=+1100.274438508" watchObservedRunningTime="2023-08-23 18:30:50.265656847 +0000 UTC m=+1100.279917382"
	Aug 23 18:30:52 addons-603000 kubelet[2300]: I0823 18:30:52.071581    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:30:52 addons-603000 kubelet[2300]: E0823 18:30:52.071933    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:31:05 addons-603000 kubelet[2300]: I0823 18:31:05.071033    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:31:05 addons-603000 kubelet[2300]: E0823 18:31:05.071210    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:31:17 addons-603000 kubelet[2300]: I0823 18:31:17.071487    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:31:17 addons-603000 kubelet[2300]: E0823 18:31:17.071633    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:31:30 addons-603000 kubelet[2300]: E0823 18:31:30.076193    2300 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 23 18:31:30 addons-603000 kubelet[2300]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 23 18:31:30 addons-603000 kubelet[2300]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 23 18:31:30 addons-603000 kubelet[2300]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 23 18:31:31 addons-603000 kubelet[2300]: I0823 18:31:31.070986    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:31:31 addons-603000 kubelet[2300]: E0823 18:31:31.071129    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:31:45 addons-603000 kubelet[2300]: I0823 18:31:45.071560    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:31:45 addons-603000 kubelet[2300]: E0823 18:31:45.071711    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:31:57 addons-603000 kubelet[2300]: I0823 18:31:57.071815    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:31:57 addons-603000 kubelet[2300]: E0823 18:31:57.071952    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:32:09 addons-603000 kubelet[2300]: I0823 18:32:09.071369    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:32:09 addons-603000 kubelet[2300]: E0823 18:32:09.071608    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	Aug 23 18:32:22 addons-603000 kubelet[2300]: I0823 18:32:22.071580    2300 scope.go:117] "RemoveContainer" containerID="0f90525fcf0bb5e01f6dc6f75ad2029a4da1e8577834d9b39145b33c5921cd98"
	Aug 23 18:32:22 addons-603000 kubelet[2300]: E0823 18:32:22.071721    2300 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(d0fb563e-9f6f-4cbe-b232-2f4956e99b67)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="d0fb563e-9f6f-4cbe-b232-2f4956e99b67"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-603000 -n addons-603000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-603000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (820.39s)
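Two patterns in the logs above are worth separating from the CloudSpanner failure itself. The kube-scheduler "forbidden" list/watch errors all occur before the "Caches are synced" line and are routine on a cold control plane while its RBAC bindings are still being bootstrapped. The kubelet log, by contrast, shows kube-ingress-dns-minikube stuck in CrashLoopBackOff for the entire capture window; the ip6tables canary failure alongside it is usually harmless noise from a guest kernel without the ip6tables nat module. A minimal triage sketch, assuming the addons-603000 cluster were still reachable (it is deleted at the end of the run):

	# Hypothetical follow-up; these commands were not part of the recorded run.
	kubectl --context addons-603000 -n kube-system describe pod kube-ingress-dns-minikube
	kubectl --context addons-603000 -n kube-system logs kube-ingress-dns-minikube --previous

describe surfaces the restart count and last-state exit code; logs --previous returns output from the attempt that crashed rather than from the current back-off wait.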

TestCertOptions (9.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-963000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-963000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.670522333s)

-- stdout --
	* [cert-options-963000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-963000 in cluster cert-options-963000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-963000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-963000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-963000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (80.728041ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-963000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-963000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-963000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-963000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-963000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (40.740292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-963000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-963000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-963000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-08-23 11:58:00.986413 -0700 PDT m=+2799.239045584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-963000 -n cert-options-963000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-963000 -n cert-options-963000: exit status 7 (29.46075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-963000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-963000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-963000
--- FAIL: TestCertOptions (9.95s)
E0823 11:58:13.899772    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:58:39.552022    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:58:43.233413    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
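Every qemu2 start in this run fails the same way: the driver cannot dial /var/run/socket_vmnet, the VM never boots, and each later assertion (the SAN checks, the kubeconfig port check) is downstream of that single host-side failure. A host-side sketch, under the assumption that socket_vmnet is installed under /opt/socket_vmnet as the qemu invocations elsewhere in this report indicate:

	# Hypothetical checks on the Jenkins host; not part of the recorded run.
	ls -l /var/run/socket_vmnet                 # does the socket the driver dials exist?
	sudo launchctl list | grep -i socket_vmnet  # is a daemon loaded to serve it?
	# Manual start with the documented default gateway; adjust the address for the local setup.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet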

TestCertExpiration (195.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-571000 --memory=2048 --cert-expiration=3m --driver=qemu2 
E0823 11:57:46.190599    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-571000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.965358542s)

-- stdout --
	* [cert-expiration-571000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-571000 in cluster cert-expiration-571000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-571000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-571000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-571000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.223987459s)

-- stdout --
	* [cert-expiration-571000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-571000 in cluster cert-expiration-571000
	* Restarting existing qemu2 VM for "cert-expiration-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-571000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-571000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-571000 in cluster cert-expiration-571000
	* Restarting existing qemu2 VM for "cert-expiration-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-08-23 12:01:01.027048 -0700 PDT m=+2979.285805667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-571000 -n cert-expiration-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-571000 -n cert-expiration-571000: exit status 7 (68.109459ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-571000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-571000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-571000
--- FAIL: TestCertExpiration (195.36s)

TestDockerFlags (10.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.090366541s)

-- stdout --
	* [docker-flags-007000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-007000 in cluster docker-flags-007000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-007000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 11:57:40.849952    4024 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:57:40.850065    4024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:40.850068    4024 out.go:309] Setting ErrFile to fd 2...
	I0823 11:57:40.850070    4024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:40.850177    4024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:57:40.851177    4024 out.go:303] Setting JSON to false
	I0823 11:57:40.866326    4024 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3434,"bootTime":1692813626,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:57:40.866406    4024 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:57:40.871303    4024 out.go:177] * [docker-flags-007000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:57:40.878352    4024 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:57:40.882255    4024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:57:40.878404    4024 notify.go:220] Checking for updates...
	I0823 11:57:40.888308    4024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:57:40.891266    4024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:57:40.894310    4024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:57:40.897306    4024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:57:40.900581    4024 config.go:182] Loaded profile config "force-systemd-flag-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:57:40.900646    4024 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:57:40.900687    4024 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:57:40.905280    4024 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:57:40.911269    4024 start.go:298] selected driver: qemu2
	I0823 11:57:40.911274    4024 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:57:40.911280    4024 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:57:40.913244    4024 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:57:40.916258    4024 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:57:40.919385    4024 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0823 11:57:40.919410    4024 cni.go:84] Creating CNI manager for ""
	I0823 11:57:40.919421    4024 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:57:40.919425    4024 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:57:40.919431    4024 start_flags.go:319] config:
	{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:docker-flags-007000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:57:40.923575    4024 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:57:40.930302    4024 out.go:177] * Starting control plane node docker-flags-007000 in cluster docker-flags-007000
	I0823 11:57:40.934279    4024 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:57:40.934301    4024 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:57:40.934313    4024 cache.go:57] Caching tarball of preloaded images
	I0823 11:57:40.934375    4024 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:57:40.934380    4024 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:57:40.934445    4024 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/docker-flags-007000/config.json ...
	I0823 11:57:40.934458    4024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/docker-flags-007000/config.json: {Name:mkc8108c41be7b5d30164921028c3bef31203db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:57:40.934678    4024 start.go:365] acquiring machines lock for docker-flags-007000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:40.934710    4024 start.go:369] acquired machines lock for "docker-flags-007000" in 25.834µs
	I0823 11:57:40.934722    4024 start.go:93] Provisioning new machine with config: &{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:docker-flags-007000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:40.934754    4024 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:40.943411    4024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:40.959765    4024 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="qemu2")
	I0823 11:57:40.959795    4024 client.go:168] LocalClient.Create starting
	I0823 11:57:40.959844    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:40.959872    4024 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:40.959883    4024 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:40.959929    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:40.959952    4024 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:40.959965    4024 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:40.960303    4024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:41.077382    4024 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:41.208935    4024 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:41.208942    4024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:41.209097    4024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2
	I0823 11:57:41.217580    4024 main.go:141] libmachine: STDOUT: 
	I0823 11:57:41.217595    4024 main.go:141] libmachine: STDERR: 
	I0823 11:57:41.217649    4024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2 +20000M
	I0823 11:57:41.224942    4024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:41.224962    4024 main.go:141] libmachine: STDERR: 
	I0823 11:57:41.224981    4024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2
	I0823 11:57:41.224987    4024 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:41.225030    4024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f7:ba:5f:fb:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2
	I0823 11:57:41.226497    4024 main.go:141] libmachine: STDOUT: 
	I0823 11:57:41.226511    4024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:41.226534    4024 client.go:171] LocalClient.Create took 266.739042ms
	I0823 11:57:43.228660    4024 start.go:128] duration metric: createHost completed in 2.29395125s
	I0823 11:57:43.228753    4024 start.go:83] releasing machines lock for "docker-flags-007000", held for 2.294112375s
	W0823 11:57:43.228857    4024 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:43.252143    4024 out.go:177] * Deleting "docker-flags-007000" in qemu2 ...
	W0823 11:57:43.267304    4024 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:43.267396    4024 start.go:687] Will try again in 5 seconds ...
	I0823 11:57:48.269467    4024 start.go:365] acquiring machines lock for docker-flags-007000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:48.391679    4024 start.go:369] acquired machines lock for "docker-flags-007000" in 122.06275ms
	I0823 11:57:48.391853    4024 start.go:93] Provisioning new machine with config: &{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:docker-flags-007000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:48.392187    4024 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:48.397678    4024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:48.443760    4024 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="qemu2")
	I0823 11:57:48.443813    4024 client.go:168] LocalClient.Create starting
	I0823 11:57:48.443955    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:48.444014    4024 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:48.444033    4024 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:48.444101    4024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:48.444138    4024 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:48.444149    4024 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:48.444637    4024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:48.573322    4024 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:48.854027    4024 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:48.854037    4024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:48.854187    4024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2
	I0823 11:57:48.862894    4024 main.go:141] libmachine: STDOUT: 
	I0823 11:57:48.862917    4024 main.go:141] libmachine: STDERR: 
	I0823 11:57:48.862998    4024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2 +20000M
	I0823 11:57:48.870304    4024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:48.870316    4024 main.go:141] libmachine: STDERR: 
	I0823 11:57:48.870346    4024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2
	I0823 11:57:48.870361    4024 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:48.870413    4024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:67:35:f8:16:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/docker-flags-007000/disk.qcow2
	I0823 11:57:48.871938    4024 main.go:141] libmachine: STDOUT: 
	I0823 11:57:48.871952    4024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:48.871965    4024 client.go:171] LocalClient.Create took 428.157666ms
	I0823 11:57:50.874026    4024 start.go:128] duration metric: createHost completed in 2.481890542s
	I0823 11:57:50.874081    4024 start.go:83] releasing machines lock for "docker-flags-007000", held for 2.482449625s
	W0823 11:57:50.874511    4024 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-007000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-007000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:50.883122    4024 out.go:177] 
	W0823 11:57:50.887168    4024 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:57:50.887201    4024 out.go:239] * 
	* 
	W0823 11:57:50.890038    4024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:57:50.897947    4024 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (79.795375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-007000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.419375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-007000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-08-23 11:57:51.039878 -0700 PDT m=+2789.292172584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000: exit status 7 (28.199125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-007000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-007000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-007000
--- FAIL: TestDockerFlags (10.34s)
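
Both VM-creation attempts in this test die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the cluster never comes up and every later docker-flags assertion runs against a stopped host. A minimal pre-flight sketch (hypothetical, not part of the suite; the binary and socket paths are the ones the log itself uses) that would distinguish "socket_vmnet down" from a genuine minikube regression:

	# Check that the socket_vmnet socket exists and accepts a connection.
	# socket_vmnet_client connects to the socket and then execs the given
	# command, so running `true` through it is a cheap connectivity probe.
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  || echo "socket_vmnet is not serving on /var/run/socket_vmnet"

On a host where that probe passes, the verification at docker_test.go:56 would be expected to show the injected variables, roughly Environment=FOO=BAR BAZ=BAT.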

TestForceSystemdFlag (12.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-089000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-089000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.833080958s)

-- stdout --
	* [force-systemd-flag-089000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-089000 in cluster force-systemd-flag-089000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-089000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 11:57:33.820924    4002 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:57:33.821037    4002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:33.821040    4002 out.go:309] Setting ErrFile to fd 2...
	I0823 11:57:33.821044    4002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:33.821152    4002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:57:33.822243    4002 out.go:303] Setting JSON to false
	I0823 11:57:33.837467    4002 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3427,"bootTime":1692813626,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:57:33.837529    4002 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:57:33.843209    4002 out.go:177] * [force-systemd-flag-089000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:57:33.850219    4002 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:57:33.854161    4002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:57:33.850278    4002 notify.go:220] Checking for updates...
	I0823 11:57:33.860145    4002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:57:33.863172    4002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:57:33.866251    4002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:57:33.869148    4002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:57:33.872495    4002 config.go:182] Loaded profile config "force-systemd-env-262000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:57:33.872567    4002 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:57:33.872630    4002 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:57:33.877171    4002 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:57:33.884156    4002 start.go:298] selected driver: qemu2
	I0823 11:57:33.884160    4002 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:57:33.884165    4002 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:57:33.886004    4002 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:57:33.889148    4002 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:57:33.890626    4002 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0823 11:57:33.890646    4002 cni.go:84] Creating CNI manager for ""
	I0823 11:57:33.890653    4002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:57:33.890657    4002 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:57:33.890663    4002 start_flags.go:319] config:
	{Name:force-systemd-flag-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:force-systemd-flag-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:57:33.894631    4002 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:57:33.902169    4002 out.go:177] * Starting control plane node force-systemd-flag-089000 in cluster force-systemd-flag-089000
	I0823 11:57:33.906146    4002 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:57:33.906166    4002 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:57:33.906179    4002 cache.go:57] Caching tarball of preloaded images
	I0823 11:57:33.906264    4002 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:57:33.906270    4002 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:57:33.906340    4002 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/force-systemd-flag-089000/config.json ...
	I0823 11:57:33.906356    4002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/force-systemd-flag-089000/config.json: {Name:mke90f5ae0940067777c2b52abfbd70dc21a152e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:57:33.906564    4002 start.go:365] acquiring machines lock for force-systemd-flag-089000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:33.906598    4002 start.go:369] acquired machines lock for "force-systemd-flag-089000" in 26.333µs
	I0823 11:57:33.906610    4002 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:force-systemd-flag-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:33.906642    4002 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:33.911206    4002 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:33.927440    4002 start.go:159] libmachine.API.Create for "force-systemd-flag-089000" (driver="qemu2")
	I0823 11:57:33.927469    4002 client.go:168] LocalClient.Create starting
	I0823 11:57:33.927525    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:33.927565    4002 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:33.927577    4002 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:33.927615    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:33.927634    4002 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:33.927643    4002 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:33.928004    4002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:34.044115    4002 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:34.287572    4002 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:34.287583    4002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:34.287805    4002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2
	I0823 11:57:34.297187    4002 main.go:141] libmachine: STDOUT: 
	I0823 11:57:34.297204    4002 main.go:141] libmachine: STDERR: 
	I0823 11:57:34.297258    4002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2 +20000M
	I0823 11:57:34.304558    4002 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:34.304570    4002 main.go:141] libmachine: STDERR: 
	I0823 11:57:34.304585    4002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2
	I0823 11:57:34.304594    4002 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:34.304642    4002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:2d:42:b4:dc:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2
	I0823 11:57:34.306148    4002 main.go:141] libmachine: STDOUT: 
	I0823 11:57:34.306160    4002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:34.306189    4002 client.go:171] LocalClient.Create took 378.725208ms
	I0823 11:57:36.308276    4002 start.go:128] duration metric: createHost completed in 2.40169475s
	I0823 11:57:36.308334    4002 start.go:83] releasing machines lock for "force-systemd-flag-089000", held for 2.401809292s
	W0823 11:57:36.308404    4002 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:36.319493    4002 out.go:177] * Deleting "force-systemd-flag-089000" in qemu2 ...
	W0823 11:57:36.341036    4002 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:36.341067    4002 start.go:687] Will try again in 5 seconds ...
	I0823 11:57:41.341176    4002 start.go:365] acquiring machines lock for force-systemd-flag-089000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:43.228901    4002 start.go:369] acquired machines lock for "force-systemd-flag-089000" in 1.887726875s
	I0823 11:57:43.229121    4002 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:force-systemd-flag-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:43.229482    4002 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:43.242119    4002 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:43.289592    4002 start.go:159] libmachine.API.Create for "force-systemd-flag-089000" (driver="qemu2")
	I0823 11:57:43.289627    4002 client.go:168] LocalClient.Create starting
	I0823 11:57:43.289781    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:43.289847    4002 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:43.289869    4002 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:43.289962    4002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:43.290004    4002 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:43.290022    4002 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:43.290540    4002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:43.420544    4002 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:43.564563    4002 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:43.564576    4002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:43.564731    4002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2
	I0823 11:57:43.573780    4002 main.go:141] libmachine: STDOUT: 
	I0823 11:57:43.573794    4002 main.go:141] libmachine: STDERR: 
	I0823 11:57:43.573865    4002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2 +20000M
	I0823 11:57:43.580992    4002 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:43.581004    4002 main.go:141] libmachine: STDERR: 
	I0823 11:57:43.581019    4002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2
	I0823 11:57:43.581027    4002 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:43.581072    4002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:09:0b:f6:99:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-flag-089000/disk.qcow2
	I0823 11:57:43.582536    4002 main.go:141] libmachine: STDOUT: 
	I0823 11:57:43.582549    4002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:43.582561    4002 client.go:171] LocalClient.Create took 292.93875ms
	I0823 11:57:45.583633    4002 start.go:128] duration metric: createHost completed in 2.35419575s
	I0823 11:57:45.583728    4002 start.go:83] releasing machines lock for "force-systemd-flag-089000", held for 2.354847875s
	W0823 11:57:45.584135    4002 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:45.596055    4002 out.go:177] 
	W0823 11:57:45.599980    4002 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:57:45.600001    4002 out.go:239] * 
	* 
	W0823 11:57:45.602589    4002 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:57:45.612809    4002 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-089000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-089000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-089000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.575417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-089000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-089000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-08-23 11:57:45.708926 -0700 PDT m=+2783.961038917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-089000 -n force-systemd-flag-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-089000 -n force-systemd-flag-089000: exit status 7 (36.038916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-089000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-089000
--- FAIL: TestForceSystemdFlag (12.05s)
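
The assertion at docker_test.go:110 is only meaningful once the cluster is running; it reduces to the command below, which with --force-systemd would be expected to print systemd rather than cgroupfs (a sketch reusing the binary and profile names from the log):

	# Cgroup-driver query the test performs after a successful start.
	out/minikube-darwin-arm64 -p force-systemd-flag-089000 ssh \
	  "docker info --format {{.CgroupDriver}}"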

TestForceSystemdEnv (10.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-262000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-262000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.869720125s)

-- stdout --
	* [force-systemd-env-262000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-262000 in cluster force-systemd-env-262000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 11:57:30.768450    3983 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:57:30.768576    3983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:30.768580    3983 out.go:309] Setting ErrFile to fd 2...
	I0823 11:57:30.768582    3983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:57:30.768713    3983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:57:30.769849    3983 out.go:303] Setting JSON to false
	I0823 11:57:30.786700    3983 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3424,"bootTime":1692813626,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:57:30.786755    3983 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:57:30.791600    3983 out.go:177] * [force-systemd-env-262000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:57:30.797520    3983 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:57:30.801495    3983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:57:30.797551    3983 notify.go:220] Checking for updates...
	I0823 11:57:30.807454    3983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:57:30.810487    3983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:57:30.811899    3983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:57:30.815472    3983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0823 11:57:30.819459    3983 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:57:30.819552    3983 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:57:30.823316    3983 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:57:30.830494    3983 start.go:298] selected driver: qemu2
	I0823 11:57:30.830507    3983 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:57:30.830516    3983 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:57:30.833046    3983 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:57:30.836468    3983 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:57:30.839569    3983 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0823 11:57:30.839599    3983 cni.go:84] Creating CNI manager for ""
	I0823 11:57:30.839606    3983 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:57:30.839611    3983 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:57:30.839620    3983 start_flags.go:319] config:
	{Name:force-systemd-env-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:force-systemd-env-262000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:57:30.844848    3983 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:57:30.851428    3983 out.go:177] * Starting control plane node force-systemd-env-262000 in cluster force-systemd-env-262000
	I0823 11:57:30.855446    3983 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:57:30.855466    3983 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:57:30.855477    3983 cache.go:57] Caching tarball of preloaded images
	I0823 11:57:30.855538    3983 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:57:30.855543    3983 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:57:30.855599    3983 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/force-systemd-env-262000/config.json ...
	I0823 11:57:30.855610    3983 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/force-systemd-env-262000/config.json: {Name:mk221f1c03fe795da8ed84dd0d160ff51f1ae1a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:57:30.855823    3983 start.go:365] acquiring machines lock for force-systemd-env-262000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:30.855853    3983 start.go:369] acquired machines lock for "force-systemd-env-262000" in 21.292µs
	I0823 11:57:30.855863    3983 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:force-systemd-env-262000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:30.855893    3983 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:30.860422    3983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:30.874228    3983 start.go:159] libmachine.API.Create for "force-systemd-env-262000" (driver="qemu2")
	I0823 11:57:30.874258    3983 client.go:168] LocalClient.Create starting
	I0823 11:57:30.874321    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:30.874344    3983 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:30.874355    3983 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:30.874398    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:30.874416    3983 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:30.874421    3983 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:30.874736    3983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:31.002236    3983 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:31.060500    3983 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:31.060507    3983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:31.060640    3983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2
	I0823 11:57:31.069131    3983 main.go:141] libmachine: STDOUT: 
	I0823 11:57:31.069144    3983 main.go:141] libmachine: STDERR: 
	I0823 11:57:31.069209    3983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2 +20000M
	I0823 11:57:31.076461    3983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:31.076480    3983 main.go:141] libmachine: STDERR: 
	I0823 11:57:31.076499    3983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2
	I0823 11:57:31.076505    3983 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:31.076542    3983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:cc:46:c9:b5:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2
	I0823 11:57:31.078088    3983 main.go:141] libmachine: STDOUT: 
	I0823 11:57:31.078101    3983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:31.078118    3983 client.go:171] LocalClient.Create took 203.859125ms
	I0823 11:57:33.080246    3983 start.go:128] duration metric: createHost completed in 2.224410333s
	I0823 11:57:33.080309    3983 start.go:83] releasing machines lock for "force-systemd-env-262000", held for 2.224521959s
	W0823 11:57:33.080403    3983 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:33.087758    3983 out.go:177] * Deleting "force-systemd-env-262000" in qemu2 ...
	W0823 11:57:33.107812    3983 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:33.107845    3983 start.go:687] Will try again in 5 seconds ...
	I0823 11:57:38.109937    3983 start.go:365] acquiring machines lock for force-systemd-env-262000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:57:38.110441    3983 start.go:369] acquired machines lock for "force-systemd-env-262000" in 396.084µs
	I0823 11:57:38.110602    3983 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:force-systemd-env-262000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:57:38.110976    3983 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:57:38.121816    3983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0823 11:57:38.168992    3983 start.go:159] libmachine.API.Create for "force-systemd-env-262000" (driver="qemu2")
	I0823 11:57:38.169039    3983 client.go:168] LocalClient.Create starting
	I0823 11:57:38.169173    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:57:38.169234    3983 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:38.169260    3983 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:38.169353    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:57:38.169389    3983 main.go:141] libmachine: Decoding PEM data...
	I0823 11:57:38.169407    3983 main.go:141] libmachine: Parsing certificate...
	I0823 11:57:38.169943    3983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:57:38.296735    3983 main.go:141] libmachine: Creating SSH key...
	I0823 11:57:38.548903    3983 main.go:141] libmachine: Creating Disk image...
	I0823 11:57:38.548911    3983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:57:38.549065    3983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2
	I0823 11:57:38.557702    3983 main.go:141] libmachine: STDOUT: 
	I0823 11:57:38.557718    3983 main.go:141] libmachine: STDERR: 
	I0823 11:57:38.557777    3983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2 +20000M
	I0823 11:57:38.564970    3983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:57:38.564982    3983 main.go:141] libmachine: STDERR: 
	I0823 11:57:38.564996    3983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2
	I0823 11:57:38.565001    3983 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:57:38.565042    3983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d3:e4:3f:ed:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/force-systemd-env-262000/disk.qcow2
	I0823 11:57:38.566543    3983 main.go:141] libmachine: STDOUT: 
	I0823 11:57:38.566557    3983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:57:38.566569    3983 client.go:171] LocalClient.Create took 397.538875ms
	I0823 11:57:40.568661    3983 start.go:128] duration metric: createHost completed in 2.457737542s
	I0823 11:57:40.568751    3983 start.go:83] releasing machines lock for "force-systemd-env-262000", held for 2.458345459s
	W0823 11:57:40.569097    3983 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:57:40.578801    3983 out.go:177] 
	W0823 11:57:40.582739    3983 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:57:40.582764    3983 out.go:239] * 
	* 
	W0823 11:57:40.585494    3983 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:57:40.593759    3983 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-262000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-262000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-262000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (74.367459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-262000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-262000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-08-23 11:57:40.692377 -0700 PDT m=+2778.944318959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-262000 -n force-systemd-env-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-262000 -n force-systemd-env-262000: exit status 7 (34.317083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-262000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-262000
--- FAIL: TestForceSystemdEnv (10.08s)
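
Note (editorial, not part of the captured output): exit status 80 above means the qemu2 VM never started, so the follow-up ssh probe can only fail with exit status 89 ("The control plane node must be running"). On a host where the VM does come up, the check this test performs can be reproduced by hand; "demo" is a hypothetical profile name.

    # start with systemd forced via the same env var the test suite sets
    MINIKUBE_FORCE_SYSTEMD=true minikube start -p demo --driver=qemu2
    # query the cgroup driver the way docker_test.go:110 does; expect "systemd"
    minikube -p demo ssh "docker info --format {{.CgroupDriver}}"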
TestFunctional/parallel/ServiceCmdConnect (28.65s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-840000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-840000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-j254w" [558e2112-fcc7-4352-8c25-6ce8c8be8405] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-j254w" [558e2112-fcc7-4352-8c25-6ce8c8be8405] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.010291208s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:32258
functional_test.go:1660: error fetching http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:32258: Get "http://192.168.105.4:32258": dial tcp 192.168.105.4:32258: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-840000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-j254w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-840000/192.168.105.4
Start Time:       Wed, 23 Aug 2023 11:48:06 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://3ea5ecff60421627086eedb618769fef0c502cf23cd021ba3f99ef39201fd149
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 23 Aug 2023 11:48:23 -0700
      Finished:     Wed, 23 Aug 2023 11:48:23 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 23 Aug 2023 11:48:07 -0700
      Finished:     Wed, 23 Aug 2023 11:48:07 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6j44 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-z6j44:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  27s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-j254w to functional-840000
  Normal   Pulled     10s (x3 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    10s (x3 over 27s)  kubelet            Created container echoserver-arm
  Normal   Started    10s (x3 over 27s)  kubelet            Started container echoserver-arm
  Warning  BackOff    10s (x3 over 25s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-j254w_default(558e2112-fcc7-4352-8c25-6ce8c8be8405)
functional_test.go:1607: (dbg) Run:  kubectl --context functional-840000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
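
Note (editorial): "exec format error" means the binary inside the image does not match the node's CPU architecture (arm64 here), which is why the pod crash-loops and every fetch above is refused. A quick way to check what architecture an image was built for, assuming it is already pulled on the docker host:

    docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'
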
functional_test.go:1613: (dbg) Run:  kubectl --context functional-840000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.126.247
IPs:                      10.102.126.247
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32258/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
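
Note (editorial): the empty "Endpoints:" field above follows from the crash-looping pod; a pod that never becomes Ready is excluded from its Service's endpoints, so the Service has no backends and connections to NodePort 32258 are refused. To confirm against a live cluster:

    kubectl --context functional-840000 get endpoints hello-node-connect
    kubectl --context functional-840000 get pods -l app=hello-node-connect
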
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-840000 -n functional-840000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount   | -p functional-840000                                                                                                 | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2274531702/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh -- ls                                                                                          | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh cat                                                                                            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | /mount-9p/test-1692816502663966000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh stat                                                                                           | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh stat                                                                                           | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh sudo                                                                                           | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-840000                                                                                                 | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2220720469/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh -- ls                                                                                          | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh sudo                                                                                           | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-840000                                                                                                 | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-840000                                                                                                 | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-840000                                                                                                 | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-840000 ssh findmnt                                                                                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:47:02
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:47:02.822062    2716 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:47:02.822177    2716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:47:02.822179    2716 out.go:309] Setting ErrFile to fd 2...
	I0823 11:47:02.822181    2716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:47:02.822291    2716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:47:02.823288    2716 out.go:303] Setting JSON to false
	I0823 11:47:02.839008    2716 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2796,"bootTime":1692813626,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:47:02.839089    2716 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:47:02.844340    2716 out.go:177] * [functional-840000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:47:02.851255    2716 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:47:02.855310    2716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:47:02.851304    2716 notify.go:220] Checking for updates...
	I0823 11:47:02.861269    2716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:47:02.864290    2716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:47:02.867184    2716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:47:02.874252    2716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:47:02.876206    2716 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:47:02.876260    2716 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:47:02.884317    2716 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 11:47:02.887266    2716 start.go:298] selected driver: qemu2
	I0823 11:47:02.887268    2716 start.go:902] validating driver "qemu2" against &{Name:functional-840000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:functional-840000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:47:02.887315    2716 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:47:02.889215    2716 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:47:02.889239    2716 cni.go:84] Creating CNI manager for ""
	I0823 11:47:02.889246    2716 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:47:02.889251    2716 start_flags.go:319] config:
	{Name:functional-840000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:functional-840000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:47:02.893027    2716 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:47:02.901314    2716 out.go:177] * Starting control plane node functional-840000 in cluster functional-840000
	I0823 11:47:02.905184    2716 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:47:02.905196    2716 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:47:02.905208    2716 cache.go:57] Caching tarball of preloaded images
	I0823 11:47:02.905260    2716 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:47:02.905264    2716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:47:02.905319    2716 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/config.json ...
	I0823 11:47:02.905674    2716 start.go:365] acquiring machines lock for functional-840000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:47:02.905701    2716 start.go:369] acquired machines lock for "functional-840000" in 23µs
	I0823 11:47:02.905708    2716 start.go:96] Skipping create...Using existing machine configuration
	I0823 11:47:02.905710    2716 fix.go:54] fixHost starting: 
	I0823 11:47:02.906311    2716 fix.go:102] recreateIfNeeded on functional-840000: state=Running err=<nil>
	W0823 11:47:02.906320    2716 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 11:47:02.911336    2716 out.go:177] * Updating the running qemu2 "functional-840000" VM ...
	I0823 11:47:02.919254    2716 machine.go:88] provisioning docker machine ...
	I0823 11:47:02.919261    2716 buildroot.go:166] provisioning hostname "functional-840000"
	I0823 11:47:02.919288    2716 main.go:141] libmachine: Using SSH client type: native
	I0823 11:47:02.919524    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10093a1e0] 0x10093cc40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0823 11:47:02.919529    2716 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-840000 && echo "functional-840000" | sudo tee /etc/hostname
	I0823 11:47:02.988946    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-840000
	
	I0823 11:47:02.988987    2716 main.go:141] libmachine: Using SSH client type: native
	I0823 11:47:02.989223    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10093a1e0] 0x10093cc40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0823 11:47:02.989230    2716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-840000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-840000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-840000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:47:03.055518    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 11:47:03.055525    2716 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:47:03.055532    2716 buildroot.go:174] setting up certificates
	I0823 11:47:03.055539    2716 provision.go:83] configureAuth start
	I0823 11:47:03.055542    2716 provision.go:138] copyHostCerts
	I0823 11:47:03.055612    2716 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem, removing ...
	I0823 11:47:03.055617    2716 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem
	I0823 11:47:03.055715    2716 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:47:03.055895    2716 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem, removing ...
	I0823 11:47:03.055896    2716 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem
	I0823 11:47:03.055971    2716 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
	I0823 11:47:03.056071    2716 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem, removing ...
	I0823 11:47:03.056072    2716 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem
	I0823 11:47:03.056167    2716 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:47:03.056276    2716 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.functional-840000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-840000]
	I0823 11:47:03.205491    2716 provision.go:172] copyRemoteCerts
	I0823 11:47:03.205530    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:47:03.205538    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
	I0823 11:47:03.241297    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0823 11:47:03.248340    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:47:03.254882    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0823 11:47:03.261779    2716 provision.go:86] duration metric: configureAuth took 206.238542ms
	I0823 11:47:03.261784    2716 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:47:03.261894    2716 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:47:03.261922    2716 main.go:141] libmachine: Using SSH client type: native
	I0823 11:47:03.262142    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10093a1e0] 0x10093cc40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0823 11:47:03.262145    2716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:47:03.327888    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:47:03.327893    2716 buildroot.go:70] root file system type: tmpfs
	I0823 11:47:03.327946    2716 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:47:03.327988    2716 main.go:141] libmachine: Using SSH client type: native
	I0823 11:47:03.328233    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10093a1e0] 0x10093cc40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0823 11:47:03.328265    2716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:47:03.399867    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
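Note (editorial): the bare "ExecStart=" line in the unit above is the standard systemd idiom for replacing, rather than appending to, an inherited ExecStart, exactly as the comments in the unit explain. A minimal drop-in sketch of the same pattern (paths and daemon flags are illustrative only):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker
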
	I0823 11:47:03.399928    2716 main.go:141] libmachine: Using SSH client type: native
	I0823 11:47:03.400160    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10093a1e0] 0x10093cc40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0823 11:47:03.400169    2716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:47:03.467768    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 11:47:03.467774    2716 machine.go:91] provisioned docker machine in 548.536166ms
	I0823 11:47:03.467778    2716 start.go:300] post-start starting for "functional-840000" (driver="qemu2")
	I0823 11:47:03.467783    2716 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:47:03.467840    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:47:03.467847    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
	I0823 11:47:03.505161    2716 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:47:03.506636    2716 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:47:03.506644    2716 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:47:03.506706    2716 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:47:03.506817    2716 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0823 11:47:03.506919    2716 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/test/nested/copy/1554/hosts -> hosts in /etc/test/nested/copy/1554
	I0823 11:47:03.506960    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1554
	I0823 11:47:03.510054    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0823 11:47:03.517543    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/test/nested/copy/1554/hosts --> /etc/test/nested/copy/1554/hosts (40 bytes)
	I0823 11:47:03.526070    2716 start.go:303] post-start completed in 58.288458ms
	I0823 11:47:03.526075    2716 fix.go:56] fixHost completed within 620.387083ms
	I0823 11:47:03.526113    2716 main.go:141] libmachine: Using SSH client type: native
	I0823 11:47:03.526357    2716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10093a1e0] 0x10093cc40 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0823 11:47:03.526360    2716 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0823 11:47:03.593657    2716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692816423.679919362
	
	I0823 11:47:03.593663    2716 fix.go:206] guest clock: 1692816423.679919362
	I0823 11:47:03.593668    2716 fix.go:219] Guest: 2023-08-23 11:47:03.679919362 -0700 PDT Remote: 2023-08-23 11:47:03.526076 -0700 PDT m=+0.723538668 (delta=153.843362ms)
	I0823 11:47:03.593678    2716 fix.go:190] guest clock delta is within tolerance: 153.843362ms
	I0823 11:47:03.593680    2716 start.go:83] releasing machines lock for "functional-840000", held for 688.001ms
	I0823 11:47:03.594042    2716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:47:03.594042    2716 ssh_runner.go:195] Run: cat /version.json
	I0823 11:47:03.594048    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
	I0823 11:47:03.594060    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
	I0823 11:47:03.629534    2716 ssh_runner.go:195] Run: systemctl --version
	I0823 11:47:03.631654    2716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:47:03.671409    2716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:47:03.671437    2716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 11:47:03.674194    2716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0823 11:47:03.674197    2716 start.go:466] detecting cgroup driver to use...
	I0823 11:47:03.674260    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:47:03.679866    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0823 11:47:03.682888    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:47:03.686251    2716 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:47:03.686274    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:47:03.689260    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:47:03.692244    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:47:03.695340    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:47:03.698925    2716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:47:03.702710    2716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:47:03.706027    2716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:47:03.708624    2716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:47:03.711542    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:47:03.807452    2716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 11:47:03.813892    2716 start.go:466] detecting cgroup driver to use...
	I0823 11:47:03.813932    2716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:47:03.821555    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:47:03.825969    2716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:47:03.833979    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:47:03.838369    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:47:03.842966    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:47:03.847967    2716 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:47:03.849584    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:47:03.852175    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:47:03.857445    2716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:47:03.957262    2716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:47:04.054001    2716 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:47:04.054009    2716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0823 11:47:04.060242    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:47:04.155581    2716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:47:15.486693    2716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.33149925s)
	I0823 11:47:15.486756    2716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:47:15.563529    2716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0823 11:47:15.648495    2716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:47:15.730199    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:47:15.812002    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0823 11:47:15.820093    2716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:47:15.921072    2716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0823 11:47:15.947751    2716 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0823 11:47:15.947813    2716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0823 11:47:15.950049    2716 start.go:534] Will wait 60s for crictl version
	I0823 11:47:15.950087    2716 ssh_runner.go:195] Run: which crictl
	I0823 11:47:15.951446    2716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0823 11:47:15.964061    2716 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0823 11:47:15.964131    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:47:15.971999    2716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:47:15.988840    2716 out.go:204] * Preparing Kubernetes v1.28.0 on Docker 24.0.4 ...
	I0823 11:47:15.988916    2716 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:47:15.993854    2716 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0823 11:47:15.996946    2716 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:47:15.996992    2716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:47:16.002788    2716 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-840000
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0823 11:47:16.002796    2716 docker.go:566] Images already preloaded, skipping extraction
	I0823 11:47:16.002839    2716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:47:16.008534    2716 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-840000
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0823 11:47:16.008538    2716 cache_images.go:84] Images are preloaded, skipping loading
	I0823 11:47:16.008574    2716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:47:16.015630    2716 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0823 11:47:16.015646    2716 cni.go:84] Creating CNI manager for ""
	I0823 11:47:16.015651    2716 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:47:16.015655    2716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:47:16.015663    2716 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-840000 NodeName:functional-840000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0823 11:47:16.015720    2716 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-840000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
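	The three kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration) plus the KubeProxyConfiguration are exactly what gets written to /var/tmp/minikube/kubeadm.yaml.new further down. As a sketch (not part of the test run), a config like this can be sanity-checked on the guest with kubeadm's dry-run mode, which parses the file and prints the manifests it would generate without mutating the node:

	    # Hedged example: parse/validate the generated config without applying it.
	    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run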
	
	I0823 11:47:16.015755    2716 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-840000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:functional-840000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
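	The kubelet unit shown above is installed as a systemd drop-in (the 10-kubeadm.conf scp on the following lines). A quick way to confirm the drop-in is in effect on the guest, sketched here rather than taken from the test run:

	    # Print the effective unit, including every drop-in fragment.
	    sudo systemctl cat kubelet
	    # Re-read unit files after hand-editing a drop-in.
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet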
	I0823 11:47:16.015795    2716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0823 11:47:16.019087    2716 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:47:16.019114    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:47:16.022306    2716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0823 11:47:16.027725    2716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 11:47:16.032807    2716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0823 11:47:16.037767    2716 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0823 11:47:16.039063    2716 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000 for IP: 192.168.105.4
	I0823 11:47:16.039069    2716 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:47:16.039205    2716 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:47:16.039241    2716 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:47:16.039306    2716 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.key
	I0823 11:47:16.039345    2716 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/apiserver.key.942c473b
	I0823 11:47:16.039389    2716 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/proxy-client.key
	I0823 11:47:16.039528    2716 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554.pem (1338 bytes)
	W0823 11:47:16.039559    2716 certs.go:433] ignoring /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0823 11:47:16.039565    2716 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:47:16.039585    2716 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:47:16.039611    2716 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:47:16.039627    2716 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:47:16.039673    2716 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0823 11:47:16.040002    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:47:16.047179    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:47:16.054512    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:47:16.061035    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 11:47:16.073304    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:47:16.083351    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:47:16.095470    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:47:16.104123    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:47:16.111087    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0823 11:47:16.118621    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0823 11:47:16.125321    2716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:47:16.132209    2716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 11:47:16.137072    2716 ssh_runner.go:195] Run: openssl version
	I0823 11:47:16.138881    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0823 11:47:16.142016    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0823 11:47:16.143381    2716 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 23 18:45 /usr/share/ca-certificates/15542.pem
	I0823 11:47:16.143402    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0823 11:47:16.145168    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0823 11:47:16.147682    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:47:16.151201    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:47:16.152668    2716 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:47:16.152687    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:47:16.154454    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0823 11:47:16.157073    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0823 11:47:16.159896    2716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0823 11:47:16.161336    2716 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 23 18:45 /usr/share/ca-certificates/1554.pem
	I0823 11:47:16.161354    2716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0823 11:47:16.163101    2716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0823 11:47:16.166427    2716 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:47:16.167845    2716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0823 11:47:16.169720    2716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0823 11:47:16.171422    2716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0823 11:47:16.173149    2716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0823 11:47:16.174856    2716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0823 11:47:16.176748    2716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
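	Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. Reproduced by hand for one of the certs named above:

	    # Exit status: 0 = valid for at least another day, 1 = expiring within 24h.
	    sudo openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-etcd-client.crt \
	      && echo ok || echo 'expires within 24h'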
	I0823 11:47:16.178423    2716 kubeadm.go:404] StartCluster: {Name:functional-840000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:functional-840000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:47:16.178484    2716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:47:16.184105    2716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:47:16.187419    2716 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0823 11:47:16.187429    2716 kubeadm.go:636] restartCluster start
	I0823 11:47:16.187454    2716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0823 11:47:16.190509    2716 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0823 11:47:16.190802    2716 kubeconfig.go:92] found "functional-840000" server: "https://192.168.105.4:8441"
	I0823 11:47:16.191511    2716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0823 11:47:16.194896    2716 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0823 11:47:16.194899    2716 kubeadm.go:1128] stopping kube-system containers ...
	I0823 11:47:16.194935    2716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:47:16.201714    2716 docker.go:462] Stopping containers: [0ad4dacac9b7 21f58cfa080c 2b223e71233b 0bf5b216836b a61bd0bab263 bf1b3087aafc c2e054a69497 a3d8da797f8b 50d592deaae8 2a6ff9c3d580 be38ed964f28 80a40848eb65 2286f0747aad 74e21eb1d16f 04cc9941d78b 5c91af88eebc 7b503133b699 a9a43177d39f 3ec8a6f2f63f a8939344ede9 38a1d6b4cc4e 1a018fc3c6c0 8fabd1967acc 396ca106781f f2afff0ea5c8 ae14a1bba536 a9eceee4e950]
	I0823 11:47:16.201772    2716 ssh_runner.go:195] Run: docker stop 0ad4dacac9b7 21f58cfa080c 2b223e71233b 0bf5b216836b a61bd0bab263 bf1b3087aafc c2e054a69497 a3d8da797f8b 50d592deaae8 2a6ff9c3d580 be38ed964f28 80a40848eb65 2286f0747aad 74e21eb1d16f 04cc9941d78b 5c91af88eebc 7b503133b699 a9a43177d39f 3ec8a6f2f63f a8939344ede9 38a1d6b4cc4e 1a018fc3c6c0 8fabd1967acc 396ca106781f f2afff0ea5c8 ae14a1bba536 a9eceee4e950
	I0823 11:47:16.208684    2716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0823 11:47:16.296398    2716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:47:16.300349    2716 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Aug 23 18:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Aug 23 18:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 23 18:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 23 18:45 /etc/kubernetes/scheduler.conf
	
	I0823 11:47:16.300381    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0823 11:47:16.303873    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0823 11:47:16.307396    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0823 11:47:16.310911    2716 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0823 11:47:16.310932    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0823 11:47:16.314348    2716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0823 11:47:16.317207    2716 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0823 11:47:16.317232    2716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0823 11:47:16.319869    2716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:47:16.323139    2716 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0823 11:47:16.323141    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 11:47:16.343890    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 11:47:16.845299    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0823 11:47:16.951571    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 11:47:16.981605    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0823 11:47:17.007271    2716 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:47:17.007327    2716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:47:17.013289    2716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:47:17.518977    2716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:47:18.018975    2716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:47:18.023218    2716 api_server.go:72] duration metric: took 1.015984958s to wait for apiserver process to appear ...
	I0823 11:47:18.023223    2716 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:47:18.023235    2716 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0823 11:47:20.222865    2716 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0823 11:47:20.222873    2716 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0823 11:47:20.222878    2716 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0823 11:47:20.271995    2716 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0823 11:47:20.272006    2716 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0823 11:47:20.772459    2716 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0823 11:47:20.776193    2716 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0823 11:47:20.776202    2716 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0823 11:47:21.274016    2716 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0823 11:47:21.277713    2716 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0823 11:47:21.282814    2716 api_server.go:141] control plane version: v1.28.0
	I0823 11:47:21.282819    2716 api_server.go:131] duration metric: took 3.2597095s to wait for apiserver health ...
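	The [+]/[-] listings above are the apiserver's verbose healthz output, and the initial 403 is expected: the probe runs anonymously before the rbac/bootstrap-roles post-start hook finishes. The same per-check breakdown can be pulled by hand from the guest using the admin kubeconfig written earlier (a sketch, not part of the run):

	    # Authenticated verbose healthz via the raw API path.
	    sudo /var/lib/minikube/binaries/v1.28.0/kubectl \
	      --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'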
	I0823 11:47:21.282823    2716 cni.go:84] Creating CNI manager for ""
	I0823 11:47:21.282829    2716 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:47:21.286959    2716 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0823 11:47:21.290951    2716 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0823 11:47:21.294048    2716 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
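	The 457 bytes written to /etc/cni/net.d/1-k8s.conflist carry the bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. Roughly, such a conflist looks like the following (illustrative field values, not a byte-for-byte copy of minikube's template):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }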
	I0823 11:47:21.298854    2716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:47:21.305249    2716 system_pods.go:59] 7 kube-system pods found
	I0823 11:47:21.305259    2716 system_pods.go:61] "coredns-5dd5756b68-hgbgz" [70592ab3-622f-4509-9cfa-a52a0177de15] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0823 11:47:21.305262    2716 system_pods.go:61] "etcd-functional-840000" [c4135721-5b37-4417-81e7-e292ea7adb5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0823 11:47:21.305267    2716 system_pods.go:61] "kube-apiserver-functional-840000" [ff4f4c35-a2b0-46ae-a115-f07d5f4379d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0823 11:47:21.305269    2716 system_pods.go:61] "kube-controller-manager-functional-840000" [b9649237-9d8a-4895-9061-7414b542586d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0823 11:47:21.305272    2716 system_pods.go:61] "kube-proxy-2qcbn" [95d4579e-9e62-453c-b0e8-0281147a78d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0823 11:47:21.305275    2716 system_pods.go:61] "kube-scheduler-functional-840000" [d6bdc96e-e2cd-425b-ab16-9eaa6fd6fb17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0823 11:47:21.305277    2716 system_pods.go:61] "storage-provisioner" [2195357e-5966-4b7a-bf10-519f98066ad5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0823 11:47:21.305279    2716 system_pods.go:74] duration metric: took 6.422667ms to wait for pod list to return data ...
	I0823 11:47:21.305282    2716 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:47:21.308068    2716 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:47:21.308078    2716 node_conditions.go:123] node cpu capacity is 2
	I0823 11:47:21.308084    2716 node_conditions.go:105] duration metric: took 2.799917ms to run NodePressure ...
	I0823 11:47:21.308092    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0823 11:47:21.412325    2716 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0823 11:47:21.415167    2716 kubeadm.go:787] kubelet initialised
	I0823 11:47:21.415172    2716 kubeadm.go:788] duration metric: took 2.840709ms waiting for restarted kubelet to initialise ...
	I0823 11:47:21.415176    2716 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:47:21.418492    2716 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:23.429170    2716 pod_ready.go:102] pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace has status "Ready":"False"
	I0823 11:47:25.929232    2716 pod_ready.go:102] pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace has status "Ready":"False"
	I0823 11:47:28.428451    2716 pod_ready.go:102] pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace has status "Ready":"False"
	I0823 11:47:30.427988    2716 pod_ready.go:92] pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:30.427993    2716 pod_ready.go:81] duration metric: took 9.009812833s waiting for pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:30.427997    2716 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:30.430401    2716 pod_ready.go:92] pod "etcd-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:30.430403    2716 pod_ready.go:81] duration metric: took 2.4045ms waiting for pod "etcd-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:30.430406    2716 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:32.440215    2716 pod_ready.go:102] pod "kube-apiserver-functional-840000" in "kube-system" namespace has status "Ready":"False"
	I0823 11:47:32.939503    2716 pod_ready.go:92] pod "kube-apiserver-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:32.939507    2716 pod_ready.go:81] duration metric: took 2.509186958s waiting for pod "kube-apiserver-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:32.939511    2716 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:32.942006    2716 pod_ready.go:92] pod "kube-controller-manager-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:32.942009    2716 pod_ready.go:81] duration metric: took 2.495791ms waiting for pod "kube-controller-manager-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:32.942013    2716 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2qcbn" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:32.944203    2716 pod_ready.go:92] pod "kube-proxy-2qcbn" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:32.944205    2716 pod_ready.go:81] duration metric: took 2.1905ms waiting for pod "kube-proxy-2qcbn" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:32.944208    2716 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:35.132228    2716 pod_ready.go:92] pod "kube-scheduler-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:35.132233    2716 pod_ready.go:81] duration metric: took 2.188100333s waiting for pod "kube-scheduler-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:35.132237    2716 pod_ready.go:38] duration metric: took 13.717540792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:47:35.132246    2716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:47:35.136005    2716 ops.go:34] apiserver oom_adj: -16
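	An oom_adj of -16 means the kernel OOM killer will sacrifice nearly anything else before the apiserver (the legacy scale runs -17..15, with -17 disabling OOM kill entirely; modern kernels expose the same knob as oom_score_adj). The check mirrors the command in the log:

	    # Legacy and modern views of the same setting; pgrep -xn = exact name, newest PID.
	    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj
	    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj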
	I0823 11:47:35.136009    2716 kubeadm.go:640] restartCluster took 18.949244875s
	I0823 11:47:35.136011    2716 kubeadm.go:406] StartCluster complete in 18.958258791s
	I0823 11:47:35.136018    2716 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:47:35.136106    2716 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:47:35.136611    2716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:47:35.136878    2716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:47:35.137097    2716 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:47:35.137081    2716 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0823 11:47:35.137258    2716 addons.go:69] Setting storage-provisioner=true in profile "functional-840000"
	I0823 11:47:35.137250    2716 addons.go:69] Setting default-storageclass=true in profile "functional-840000"
	I0823 11:47:35.137274    2716 addons.go:231] Setting addon storage-provisioner=true in "functional-840000"
	I0823 11:47:35.137282    2716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-840000"
	W0823 11:47:35.137287    2716 addons.go:240] addon storage-provisioner should already be in state true
	I0823 11:47:35.137381    2716 host.go:66] Checking if "functional-840000" exists ...
	I0823 11:47:35.141960    2716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:47:35.145900    2716 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0823 11:47:35.145904    2716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0823 11:47:35.145911    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
	I0823 11:47:35.146345    2716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-840000" context rescaled to 1 replicas
	I0823 11:47:35.146355    2716 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:47:35.149808    2716 out.go:177] * Verifying Kubernetes components...
	I0823 11:47:35.148614    2716 addons.go:231] Setting addon default-storageclass=true in "functional-840000"
	W0823 11:47:35.157823    2716 addons.go:240] addon default-storageclass should already be in state true
	I0823 11:47:35.157836    2716 host.go:66] Checking if "functional-840000" exists ...
	I0823 11:47:35.157876    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:47:35.158522    2716 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:47:35.158525    2716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:47:35.158529    2716 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
	I0823 11:47:35.182714    2716 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0823 11:47:35.182711    2716 node_ready.go:35] waiting up to 6m0s for node "functional-840000" to be "Ready" ...
	I0823 11:47:35.183937    2716 node_ready.go:49] node "functional-840000" has status "Ready":"True"
	I0823 11:47:35.183939    2716 node_ready.go:38] duration metric: took 1.221208ms waiting for node "functional-840000" to be "Ready" ...
	I0823 11:47:35.183941    2716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:47:35.194110    2716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0823 11:47:35.199781    2716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:47:35.229381    2716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:35.568387    2716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0823 11:47:35.572403    2716 addons.go:502] enable addons completed in 435.365625ms: enabled=[storage-provisioner default-storageclass]
	I0823 11:47:35.628383    2716 pod_ready.go:92] pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:35.628386    2716 pod_ready.go:81] duration metric: took 399.012583ms waiting for pod "coredns-5dd5756b68-hgbgz" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:35.628390    2716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:36.028465    2716 pod_ready.go:92] pod "etcd-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:36.028469    2716 pod_ready.go:81] duration metric: took 400.090792ms waiting for pod "etcd-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:36.028473    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:36.428722    2716 pod_ready.go:92] pod "kube-apiserver-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:36.428728    2716 pod_ready.go:81] duration metric: took 400.266875ms waiting for pod "kube-apiserver-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:36.428733    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:36.829221    2716 pod_ready.go:92] pod "kube-controller-manager-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:36.829226    2716 pod_ready.go:81] duration metric: took 400.503958ms waiting for pod "kube-controller-manager-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:36.829231    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2qcbn" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:37.228680    2716 pod_ready.go:92] pod "kube-proxy-2qcbn" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:37.228684    2716 pod_ready.go:81] duration metric: took 399.465583ms waiting for pod "kube-proxy-2qcbn" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:37.228688    2716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:37.628189    2716 pod_ready.go:92] pod "kube-scheduler-functional-840000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:47:37.628192    2716 pod_ready.go:81] duration metric: took 399.516416ms waiting for pod "kube-scheduler-functional-840000" in "kube-system" namespace to be "Ready" ...
	I0823 11:47:37.628196    2716 pod_ready.go:38] duration metric: took 2.44433675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:47:37.628210    2716 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:47:37.628316    2716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:47:37.633125    2716 api_server.go:72] duration metric: took 2.486849083s to wait for apiserver process to appear ...
	I0823 11:47:37.633128    2716 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:47:37.633135    2716 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0823 11:47:37.636826    2716 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0823 11:47:37.637419    2716 api_server.go:141] control plane version: v1.28.0
	I0823 11:47:37.637423    2716 api_server.go:131] duration metric: took 4.292375ms to wait for apiserver health ...
	I0823 11:47:37.637425    2716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:47:37.828550    2716 system_pods.go:59] 7 kube-system pods found
	I0823 11:47:37.828555    2716 system_pods.go:61] "coredns-5dd5756b68-hgbgz" [70592ab3-622f-4509-9cfa-a52a0177de15] Running
	I0823 11:47:37.828557    2716 system_pods.go:61] "etcd-functional-840000" [c4135721-5b37-4417-81e7-e292ea7adb5b] Running
	I0823 11:47:37.828560    2716 system_pods.go:61] "kube-apiserver-functional-840000" [ff4f4c35-a2b0-46ae-a115-f07d5f4379d8] Running
	I0823 11:47:37.828562    2716 system_pods.go:61] "kube-controller-manager-functional-840000" [b9649237-9d8a-4895-9061-7414b542586d] Running
	I0823 11:47:37.828565    2716 system_pods.go:61] "kube-proxy-2qcbn" [95d4579e-9e62-453c-b0e8-0281147a78d8] Running
	I0823 11:47:37.828566    2716 system_pods.go:61] "kube-scheduler-functional-840000" [d6bdc96e-e2cd-425b-ab16-9eaa6fd6fb17] Running
	I0823 11:47:37.828568    2716 system_pods.go:61] "storage-provisioner" [2195357e-5966-4b7a-bf10-519f98066ad5] Running
	I0823 11:47:37.828570    2716 system_pods.go:74] duration metric: took 191.150042ms to wait for pod list to return data ...
	I0823 11:47:37.828572    2716 default_sa.go:34] waiting for default service account to be created ...
	I0823 11:47:38.028285    2716 default_sa.go:45] found service account: "default"
	I0823 11:47:38.028289    2716 default_sa.go:55] duration metric: took 199.721875ms for default service account to be created ...
	I0823 11:47:38.028291    2716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0823 11:47:38.230051    2716 system_pods.go:86] 7 kube-system pods found
	I0823 11:47:38.230056    2716 system_pods.go:89] "coredns-5dd5756b68-hgbgz" [70592ab3-622f-4509-9cfa-a52a0177de15] Running
	I0823 11:47:38.230058    2716 system_pods.go:89] "etcd-functional-840000" [c4135721-5b37-4417-81e7-e292ea7adb5b] Running
	I0823 11:47:38.230060    2716 system_pods.go:89] "kube-apiserver-functional-840000" [ff4f4c35-a2b0-46ae-a115-f07d5f4379d8] Running
	I0823 11:47:38.230062    2716 system_pods.go:89] "kube-controller-manager-functional-840000" [b9649237-9d8a-4895-9061-7414b542586d] Running
	I0823 11:47:38.230064    2716 system_pods.go:89] "kube-proxy-2qcbn" [95d4579e-9e62-453c-b0e8-0281147a78d8] Running
	I0823 11:47:38.230066    2716 system_pods.go:89] "kube-scheduler-functional-840000" [d6bdc96e-e2cd-425b-ab16-9eaa6fd6fb17] Running
	I0823 11:47:38.230067    2716 system_pods.go:89] "storage-provisioner" [2195357e-5966-4b7a-bf10-519f98066ad5] Running
	I0823 11:47:38.230070    2716 system_pods.go:126] duration metric: took 201.783583ms to wait for k8s-apps to be running ...
	I0823 11:47:38.230071    2716 system_svc.go:44] waiting for kubelet service to be running ....
	I0823 11:47:38.230127    2716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:47:38.235018    2716 system_svc.go:56] duration metric: took 4.945541ms WaitForService to wait for kubelet.
	I0823 11:47:38.235022    2716 kubeadm.go:581] duration metric: took 3.088768333s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0823 11:47:38.235031    2716 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:47:38.428702    2716 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:47:38.428709    2716 node_conditions.go:123] node cpu capacity is 2
	I0823 11:47:38.428716    2716 node_conditions.go:105] duration metric: took 193.689834ms to run NodePressure ...
	I0823 11:47:38.428721    2716 start.go:228] waiting for startup goroutines ...
	I0823 11:47:38.428723    2716 start.go:233] waiting for cluster config update ...
	I0823 11:47:38.428728    2716 start.go:242] writing updated cluster config ...
	I0823 11:47:38.429134    2716 ssh_runner.go:195] Run: rm -f paused
	I0823 11:47:38.457108    2716 start.go:600] kubectl: 1.27.2, cluster: 1.28.0 (minor skew: 1)
	I0823 11:47:38.461051    2716 out.go:177] * Done! kubectl is now configured to use "functional-840000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:45:46 UTC, ends at Wed 2023-08-23 18:48:34 UTC. --
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.156872908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.156878908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:48:23 functional-840000 dockerd[6612]: time="2023-08-23T18:48:23.190390474Z" level=info msg="ignoring event" container=3ea5ecff60421627086eedb618769fef0c502cf23cd021ba3f99ef39201fd149 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.190750548Z" level=info msg="shim disconnected" id=3ea5ecff60421627086eedb618769fef0c502cf23cd021ba3f99ef39201fd149 namespace=moby
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.190806296Z" level=warning msg="cleaning up after shim disconnected" id=3ea5ecff60421627086eedb618769fef0c502cf23cd021ba3f99ef39201fd149 namespace=moby
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.190815838Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.884404692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.884457732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.884474356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:48:23 functional-840000 dockerd[6618]: time="2023-08-23T18:48:23.884485648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:48:23 functional-840000 cri-dockerd[6877]: time="2023-08-23T18:48:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/014fc4c3525bc5633e40f1347b294cd283f2bace3529b2214e9c771c9ec966b1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 23 18:48:25 functional-840000 cri-dockerd[6877]: time="2023-08-23T18:48:25Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 23 18:48:25 functional-840000 dockerd[6618]: time="2023-08-23T18:48:25.589121319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:48:25 functional-840000 dockerd[6618]: time="2023-08-23T18:48:25.589149360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:48:25 functional-840000 dockerd[6618]: time="2023-08-23T18:48:25.589177859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:48:25 functional-840000 dockerd[6618]: time="2023-08-23T18:48:25.589182734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:48:25 functional-840000 dockerd[6612]: time="2023-08-23T18:48:25.637734470Z" level=info msg="ignoring event" container=2ab7ecb21d7e0f4325dcdf586dc5c751ab6cdceed717e59ba8171224e4efb45b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:48:25 functional-840000 dockerd[6618]: time="2023-08-23T18:48:25.637966506Z" level=info msg="shim disconnected" id=2ab7ecb21d7e0f4325dcdf586dc5c751ab6cdceed717e59ba8171224e4efb45b namespace=moby
	Aug 23 18:48:25 functional-840000 dockerd[6618]: time="2023-08-23T18:48:25.637995255Z" level=warning msg="cleaning up after shim disconnected" id=2ab7ecb21d7e0f4325dcdf586dc5c751ab6cdceed717e59ba8171224e4efb45b namespace=moby
	Aug 23 18:48:25 functional-840000 dockerd[6618]: time="2023-08-23T18:48:25.637999380Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:48:26 functional-840000 dockerd[6618]: time="2023-08-23T18:48:26.931391411Z" level=info msg="shim disconnected" id=014fc4c3525bc5633e40f1347b294cd283f2bace3529b2214e9c771c9ec966b1 namespace=moby
	Aug 23 18:48:26 functional-840000 dockerd[6618]: time="2023-08-23T18:48:26.931423743Z" level=warning msg="cleaning up after shim disconnected" id=014fc4c3525bc5633e40f1347b294cd283f2bace3529b2214e9c771c9ec966b1 namespace=moby
	Aug 23 18:48:26 functional-840000 dockerd[6618]: time="2023-08-23T18:48:26.931428118Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:48:26 functional-840000 dockerd[6612]: time="2023-08-23T18:48:26.931994187Z" level=info msg="ignoring event" container=014fc4c3525bc5633e40f1347b294cd283f2bace3529b2214e9c771c9ec966b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:48:26 functional-840000 dockerd[6618]: time="2023-08-23T18:48:26.936205034Z" level=warning msg="cleanup warnings time=\"2023-08-23T18:48:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	2ab7ecb21d7e0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   014fc4c3525bc
	3ea5ecff60421       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            2                   57010a56c84df
	fa37e5c40835e       nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c                         18 seconds ago       Running             myfrontend                0                   ba4c395e504a8
	b474473d18537       72565bf5bbedf                                                                                         29 seconds ago       Exited              echoserver-arm            2                   73e1c5c9d358c
	650d12f2298bd       nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                         34 seconds ago       Running             nginx                     0                   bccb11060ebfd
	47b9b2a21d81e       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   4ccbff0d5ad80
	309dfb63643f8       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   22e165fdb682f
	e213d1c41243c       940f54a5bcae9                                                                                         About a minute ago   Running             kube-proxy                2                   60f0f68ba4e23
	cc3a023040423       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   9d434df32525b
	fa4d5839e41c4       762dce4090c5f                                                                                         About a minute ago   Running             kube-scheduler            2                   aaa7c08b86255
	a8dd20130fbd4       00543d2fe5d71                                                                                         About a minute ago   Running             kube-apiserver            0                   ae8e11a1e64e4
	5b22d8a0a7cea       46cc66ccc7c19                                                                                         About a minute ago   Running             kube-controller-manager   2                   a3dbfd3470068
	0ad4dacac9b78       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   2286f0747aadf
	2b223e71233b1       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   50d592deaae86
	0bf5b216836bf       940f54a5bcae9                                                                                         About a minute ago   Exited              kube-proxy                1                   be38ed964f283
	a61bd0bab263e       9cdd6470f48c8                                                                                         About a minute ago   Exited              etcd                      1                   04cc9941d78b3
	bf1b3087aafc4       46cc66ccc7c19                                                                                         About a minute ago   Exited              kube-controller-manager   1                   80a40848eb654
	a3d8da797f8bc       762dce4090c5f                                                                                         About a minute ago   Exited              kube-scheduler            1                   2a6ff9c3d5809
	
	* 
	* ==> coredns [2b223e71233b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50956 - 12203 "HINFO IN 40268140403951449.3293385418033556628. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.008726044s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [47b9b2a21d81] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55332 - 13083 "HINFO IN 6108838298562393571.2884266714347893895. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008718693s
	[INFO] 10.244.0.1:19714 - 21162 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00008308s
	[INFO] 10.244.0.1:10421 - 55075 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000086621s
	[INFO] 10.244.0.1:42420 - 3976 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000030082s
	[INFO] 10.244.0.1:56350 - 39486 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001301406s
	[INFO] 10.244.0.1:2113 - 37411 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000047456s
	[INFO] 10.244.0.1:13770 - 14182 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000042249s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-840000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-840000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=functional-840000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_46_03_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:46:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-840000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:48:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:48:21 +0000   Wed, 23 Aug 2023 18:46:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:48:21 +0000   Wed, 23 Aug 2023 18:46:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:48:21 +0000   Wed, 23 Aug 2023 18:46:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 23 Aug 2023 18:48:21 +0000   Wed, 23 Aug 2023 18:46:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-840000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d60fb11c7d04788bb784701813af956
	  System UUID:                8d60fb11c7d04788bb784701813af956
	  Boot ID:                    c50a4c90-9b25-40c3-8aff-cd7a079f03a5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-gxhsb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     hello-node-connect-7799dfb7c6-j254w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 coredns-5dd5756b68-hgbgz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m17s
	  kube-system                 etcd-functional-840000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-functional-840000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-840000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-2qcbn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-functional-840000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 72s                    kube-proxy       
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node functional-840000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node functional-840000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m36s (x7 over 2m36s)  kubelet          Node functional-840000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m31s                  kubelet          Node functional-840000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m31s                  kubelet          Node functional-840000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s                  kubelet          Node functional-840000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m28s                  kubelet          Node functional-840000 status is now: NodeReady
	  Normal  RegisteredNode           2m18s                  node-controller  Node functional-840000 event: Registered Node functional-840000 in Controller
	  Normal  RegisteredNode           103s                   node-controller  Node functional-840000 event: Registered Node functional-840000 in Controller
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)      kubelet          Node functional-840000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)      kubelet          Node functional-840000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)      kubelet          Node functional-840000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                    node-controller  Node functional-840000 event: Registered Node functional-840000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.151092] systemd-fstab-generator[3700]: Ignoring "noauto" for root device
	[  +0.095989] systemd-fstab-generator[3711]: Ignoring "noauto" for root device
	[  +0.109415] systemd-fstab-generator[3724]: Ignoring "noauto" for root device
	[  +5.056302] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.211898] systemd-fstab-generator[4293]: Ignoring "noauto" for root device
	[  +0.083554] systemd-fstab-generator[4304]: Ignoring "noauto" for root device
	[  +0.082233] systemd-fstab-generator[4315]: Ignoring "noauto" for root device
	[  +0.086846] systemd-fstab-generator[4326]: Ignoring "noauto" for root device
	[  +0.106885] systemd-fstab-generator[4397]: Ignoring "noauto" for root device
	[  +5.352199] kauditd_printk_skb: 36 callbacks suppressed
	[Aug23 18:47] systemd-fstab-generator[6148]: Ignoring "noauto" for root device
	[  +0.152994] systemd-fstab-generator[6180]: Ignoring "noauto" for root device
	[  +0.095863] systemd-fstab-generator[6191]: Ignoring "noauto" for root device
	[  +0.101613] systemd-fstab-generator[6214]: Ignoring "noauto" for root device
	[ +11.424790] systemd-fstab-generator[6765]: Ignoring "noauto" for root device
	[  +0.085022] systemd-fstab-generator[6776]: Ignoring "noauto" for root device
	[  +0.079552] systemd-fstab-generator[6787]: Ignoring "noauto" for root device
	[  +0.082184] systemd-fstab-generator[6798]: Ignoring "noauto" for root device
	[  +0.108345] systemd-fstab-generator[6870]: Ignoring "noauto" for root device
	[  +1.028515] systemd-fstab-generator[7117]: Ignoring "noauto" for root device
	[  +4.636134] kauditd_printk_skb: 29 callbacks suppressed
	[ +24.663275] kauditd_printk_skb: 9 callbacks suppressed
	[  +0.886645] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug23 18:48] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.019277] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [a61bd0bab263] <==
	* {"level":"info","ts":"2023-08-23T18:46:36.617267Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-23T18:46:37.713386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-23T18:46:37.713531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-23T18:46:37.713571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-08-23T18:46:37.713648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-08-23T18:46:37.713671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-23T18:46:37.713762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-08-23T18:46:37.713779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-23T18:46:37.716509Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-840000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-23T18:46:37.716582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:46:37.717175Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:46:37.717373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-23T18:46:37.717556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:46:37.720564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-23T18:46:37.720639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-08-23T18:47:04.263902Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-23T18:47:04.263934Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-840000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-08-23T18:47:04.263982Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-08-23T18:47:04.264026Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-08-23T18:47:04.271648Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-08-23T18:47:04.271676Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-08-23T18:47:04.271731Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-08-23T18:47:04.273029Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-23T18:47:04.273056Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-23T18:47:04.273061Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-840000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [cc3a02304042] <==
	* {"level":"info","ts":"2023-08-23T18:47:17.942263Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-23T18:47:17.94228Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-23T18:47:17.942819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-08-23T18:47:17.942903Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-08-23T18:47:17.943073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:47:17.943275Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:47:17.948284Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-23T18:47:17.948502Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-23T18:47:17.948365Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-23T18:47:17.948718Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-23T18:47:17.94869Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-23T18:47:19.712345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-08-23T18:47:19.712493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-08-23T18:47:19.712538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-23T18:47:19.712572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-08-23T18:47:19.712591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-08-23T18:47:19.712619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-08-23T18:47:19.712648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-08-23T18:47:19.715614Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-840000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-23T18:47:19.715696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:47:19.716158Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:47:19.716282Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-23T18:47:19.716532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:47:19.718509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-08-23T18:47:19.720155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:48:34 up 2 min,  0 users,  load average: 0.47, 0.27, 0.11
	Linux functional-840000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a8dd20130fbd] <==
	* I0823 18:47:20.387862       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0823 18:47:20.387866       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0823 18:47:20.387928       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0823 18:47:20.387971       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0823 18:47:20.388213       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0823 18:47:20.389382       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0823 18:47:20.389462       1 aggregator.go:166] initial CRD sync complete...
	I0823 18:47:20.389469       1 autoregister_controller.go:141] Starting autoregister controller
	I0823 18:47:20.389472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0823 18:47:20.389474       1 cache.go:39] Caches are synced for autoregister controller
	E0823 18:47:20.390132       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0823 18:47:20.451931       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0823 18:47:21.289401       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0823 18:47:21.460602       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0823 18:47:21.464695       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0823 18:47:21.482296       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0823 18:47:21.491875       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0823 18:47:21.494682       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0823 18:47:32.926579       1 controller.go:624] quota admission added evaluator for: endpoints
	I0823 18:47:33.126689       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:47:40.052508       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.72.126"}
	I0823 18:47:46.240007       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0823 18:47:46.288000       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.151.94"}
	I0823 18:47:56.709990       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.251.186"}
	I0823 18:48:06.146795       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.126.247"}
	
	* 
	* ==> kube-controller-manager [5b22d8a0a7ce] <==
	* I0823 18:47:33.448631       1 shared_informer.go:318] Caches are synced for garbage collector
	I0823 18:47:33.448665       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0823 18:47:46.242170       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-759d89bdcc to 1"
	I0823 18:47:46.251080       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-gxhsb"
	I0823 18:47:46.259787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="17.515797ms"
	I0823 18:47:46.264896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="5.085161ms"
	I0823 18:47:46.265138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="26.416µs"
	I0823 18:47:46.270581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="35.166µs"
	I0823 18:47:51.349667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="24.417µs"
	I0823 18:47:52.361941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="43.708µs"
	I0823 18:47:53.365886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="37.958µs"
	I0823 18:48:02.957251       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0823 18:48:02.957266       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0823 18:48:05.549620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="55.748µs"
	I0823 18:48:06.106122       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0823 18:48:06.109919       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-j254w"
	I0823 18:48:06.113434       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="7.250217ms"
	I0823 18:48:06.119228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="5.771025ms"
	I0823 18:48:06.119253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="12.583µs"
	I0823 18:48:06.126146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="15.332µs"
	I0823 18:48:07.762678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="22.499µs"
	I0823 18:48:08.773003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="25.666µs"
	I0823 18:48:09.780991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="30.915µs"
	I0823 18:48:20.122443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="26.374µs"
	I0823 18:48:23.857166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="24.082µs"
	
	* 
	* ==> kube-controller-manager [bf1b3087aafc] <==
	* I0823 18:46:51.048345       1 shared_informer.go:318] Caches are synced for TTL
	I0823 18:46:51.050511       1 shared_informer.go:318] Caches are synced for deployment
	I0823 18:46:51.051640       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0823 18:46:51.051654       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0823 18:46:51.052773       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0823 18:46:51.052778       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0823 18:46:51.053956       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0823 18:46:51.054037       1 shared_informer.go:318] Caches are synced for taint
	I0823 18:46:51.054077       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0823 18:46:51.054114       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-840000"
	I0823 18:46:51.054132       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0823 18:46:51.054078       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0823 18:46:51.054179       1 taint_manager.go:211] "Sending events to api server"
	I0823 18:46:51.054250       1 event.go:307] "Event occurred" object="functional-840000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-840000 event: Registered Node functional-840000 in Controller"
	I0823 18:46:51.055206       1 shared_informer.go:318] Caches are synced for crt configmap
	I0823 18:46:51.082585       1 shared_informer.go:318] Caches are synced for disruption
	I0823 18:46:51.139346       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0823 18:46:51.144543       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0823 18:46:51.150612       1 shared_informer.go:318] Caches are synced for resource quota
	I0823 18:46:51.163068       1 shared_informer.go:318] Caches are synced for endpoint
	I0823 18:46:51.168803       1 shared_informer.go:318] Caches are synced for resource quota
	I0823 18:46:51.233340       1 shared_informer.go:318] Caches are synced for persistent volume
	I0823 18:46:51.557429       1 shared_informer.go:318] Caches are synced for garbage collector
	I0823 18:46:51.557450       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0823 18:46:51.570574       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [0bf5b216836b] <==
	* I0823 18:46:37.074321       1 server_others.go:69] "Using iptables proxy"
	I0823 18:46:38.367053       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0823 18:46:38.378696       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0823 18:46:38.378708       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0823 18:46:38.379947       1 server_others.go:152] "Using iptables Proxier"
	I0823 18:46:38.379966       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0823 18:46:38.380024       1 server.go:846] "Version info" version="v1.28.0"
	I0823 18:46:38.380028       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:46:38.381004       1 config.go:188] "Starting service config controller"
	I0823 18:46:38.381009       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0823 18:46:38.381016       1 config.go:97] "Starting endpoint slice config controller"
	I0823 18:46:38.381017       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0823 18:46:38.381134       1 config.go:315] "Starting node config controller"
	I0823 18:46:38.381136       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0823 18:46:38.481799       1 shared_informer.go:318] Caches are synced for node config
	I0823 18:46:38.481808       1 shared_informer.go:318] Caches are synced for service config
	I0823 18:46:38.481857       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [e213d1c41243] <==
	* I0823 18:47:21.696505       1 server_others.go:69] "Using iptables proxy"
	I0823 18:47:21.701530       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0823 18:47:21.723116       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0823 18:47:21.723139       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0823 18:47:21.724163       1 server_others.go:152] "Using iptables Proxier"
	I0823 18:47:21.724179       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0823 18:47:21.724252       1 server.go:846] "Version info" version="v1.28.0"
	I0823 18:47:21.724256       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:47:21.726092       1 config.go:188] "Starting service config controller"
	I0823 18:47:21.726102       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0823 18:47:21.726110       1 config.go:97] "Starting endpoint slice config controller"
	I0823 18:47:21.726111       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0823 18:47:21.728009       1 config.go:315] "Starting node config controller"
	I0823 18:47:21.728015       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0823 18:47:21.826490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0823 18:47:21.826530       1 shared_informer.go:318] Caches are synced for service config
	I0823 18:47:21.828212       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a3d8da797f8b] <==
	* I0823 18:46:36.749454       1 serving.go:348] Generated self-signed cert in-memory
	W0823 18:46:38.317325       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0823 18:46:38.317388       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0823 18:46:38.317408       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0823 18:46:38.317440       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0823 18:46:38.340378       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0823 18:46:38.340635       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:46:38.342107       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0823 18:46:38.342410       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0823 18:46:38.342449       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0823 18:46:38.342471       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0823 18:46:38.442956       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0823 18:47:04.262730       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0823 18:47:04.262736       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [fa4d5839e41c] <==
	* I0823 18:47:18.527244       1 serving.go:348] Generated self-signed cert in-memory
	W0823 18:47:20.322299       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0823 18:47:20.322310       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0823 18:47:20.322314       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0823 18:47:20.322317       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0823 18:47:20.356305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0823 18:47:20.356372       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0823 18:47:20.358796       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0823 18:47:20.358914       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0823 18:47:20.359597       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0823 18:47:20.359607       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0823 18:47:20.460625       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:45:46 UTC, ends at Wed 2023-08-23 18:48:34 UTC. --
	Aug 23 18:48:14 functional-840000 kubelet[7123]: I0823 18:48:14.996738    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a94110ac-3bed-4666-92a4-0c37700de822\" (UniqueName: \"kubernetes.io/host-path/43323487-aecf-4506-a291-914cc6114d81-pvc-a94110ac-3bed-4666-92a4-0c37700de822\") pod \"sp-pod\" (UID: \"43323487-aecf-4506-a291-914cc6114d81\") " pod="default/sp-pod"
	Aug 23 18:48:14 functional-840000 kubelet[7123]: I0823 18:48:14.996785    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnhv\" (UniqueName: \"kubernetes.io/projected/43323487-aecf-4506-a291-914cc6114d81-kube-api-access-rlnhv\") pod \"sp-pod\" (UID: \"43323487-aecf-4506-a291-914cc6114d81\") " pod="default/sp-pod"
	Aug 23 18:48:15 functional-840000 kubelet[7123]: I0823 18:48:15.120024    7123 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="47832e82-c0be-40e6-acc0-fe611924a4c0" path="/var/lib/kubelet/pods/47832e82-c0be-40e6-acc0-fe611924a4c0/volumes"
	Aug 23 18:48:17 functional-840000 kubelet[7123]: I0823 18:48:17.178446    7123 scope.go:117] "RemoveContainer" containerID="c2e054a69497b91a2337bbdc8098e96388908d84c3d6ef9cf8e9c9defa3cf8f5"
	Aug 23 18:48:17 functional-840000 kubelet[7123]: E0823 18:48:17.227207    7123 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 23 18:48:17 functional-840000 kubelet[7123]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 23 18:48:17 functional-840000 kubelet[7123]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 23 18:48:17 functional-840000 kubelet[7123]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 23 18:48:20 functional-840000 kubelet[7123]: I0823 18:48:20.117782    7123 scope.go:117] "RemoveContainer" containerID="b474473d18537fb405e9e565d51fdf8053525a84fbc4617d24e0058b54d0ae0b"
	Aug 23 18:48:20 functional-840000 kubelet[7123]: E0823 18:48:20.117916    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-gxhsb_default(8d60ff83-4af8-48f9-ad61-c3e7ae72b39a)\"" pod="default/hello-node-759d89bdcc-gxhsb" podUID="8d60ff83-4af8-48f9-ad61-c3e7ae72b39a"
	Aug 23 18:48:20 functional-840000 kubelet[7123]: I0823 18:48:20.122034    7123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=5.011226016 podCreationTimestamp="2023-08-23 18:48:14 +0000 UTC" firstStartedPulling="2023-08-23 18:48:15.35907164 +0000 UTC m=+58.321466328" lastFinishedPulling="2023-08-23 18:48:16.469857858 +0000 UTC m=+59.432252588" observedRunningTime="2023-08-23 18:48:16.82104688 +0000 UTC m=+59.783441610" watchObservedRunningTime="2023-08-23 18:48:20.122012276 +0000 UTC m=+63.084407006"
	Aug 23 18:48:23 functional-840000 kubelet[7123]: I0823 18:48:23.117592    7123 scope.go:117] "RemoveContainer" containerID="30d1d1fa34ce1553696bf4c098ed35b9de3aa9fb2f7d64bb2a9645773d846045"
	Aug 23 18:48:23 functional-840000 kubelet[7123]: I0823 18:48:23.543546    7123 topology_manager.go:215] "Topology Admit Handler" podUID="0a9c04a9-81b0-43c2-9b8a-ade96c7da883" podNamespace="default" podName="busybox-mount"
	Aug 23 18:48:23 functional-840000 kubelet[7123]: I0823 18:48:23.653111    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm6mg\" (UniqueName: \"kubernetes.io/projected/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-kube-api-access-jm6mg\") pod \"busybox-mount\" (UID: \"0a9c04a9-81b0-43c2-9b8a-ade96c7da883\") " pod="default/busybox-mount"
	Aug 23 18:48:23 functional-840000 kubelet[7123]: I0823 18:48:23.653136    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-test-volume\") pod \"busybox-mount\" (UID: \"0a9c04a9-81b0-43c2-9b8a-ade96c7da883\") " pod="default/busybox-mount"
	Aug 23 18:48:23 functional-840000 kubelet[7123]: I0823 18:48:23.852306    7123 scope.go:117] "RemoveContainer" containerID="30d1d1fa34ce1553696bf4c098ed35b9de3aa9fb2f7d64bb2a9645773d846045"
	Aug 23 18:48:23 functional-840000 kubelet[7123]: I0823 18:48:23.852580    7123 scope.go:117] "RemoveContainer" containerID="3ea5ecff60421627086eedb618769fef0c502cf23cd021ba3f99ef39201fd149"
	Aug 23 18:48:23 functional-840000 kubelet[7123]: E0823 18:48:23.853381    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-j254w_default(558e2112-fcc7-4352-8c25-6ce8c8be8405)\"" pod="default/hello-node-connect-7799dfb7c6-j254w" podUID="558e2112-fcc7-4352-8c25-6ce8c8be8405"
	Aug 23 18:48:27 functional-840000 kubelet[7123]: I0823 18:48:27.074927    7123 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-test-volume\") pod \"0a9c04a9-81b0-43c2-9b8a-ade96c7da883\" (UID: \"0a9c04a9-81b0-43c2-9b8a-ade96c7da883\") "
	Aug 23 18:48:27 functional-840000 kubelet[7123]: I0823 18:48:27.074951    7123 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm6mg\" (UniqueName: \"kubernetes.io/projected/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-kube-api-access-jm6mg\") pod \"0a9c04a9-81b0-43c2-9b8a-ade96c7da883\" (UID: \"0a9c04a9-81b0-43c2-9b8a-ade96c7da883\") "
	Aug 23 18:48:27 functional-840000 kubelet[7123]: I0823 18:48:27.075149    7123 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-test-volume" (OuterVolumeSpecName: "test-volume") pod "0a9c04a9-81b0-43c2-9b8a-ade96c7da883" (UID: "0a9c04a9-81b0-43c2-9b8a-ade96c7da883"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 23 18:48:27 functional-840000 kubelet[7123]: I0823 18:48:27.077385    7123 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-kube-api-access-jm6mg" (OuterVolumeSpecName: "kube-api-access-jm6mg") pod "0a9c04a9-81b0-43c2-9b8a-ade96c7da883" (UID: "0a9c04a9-81b0-43c2-9b8a-ade96c7da883"). InnerVolumeSpecName "kube-api-access-jm6mg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 23 18:48:27 functional-840000 kubelet[7123]: I0823 18:48:27.176120    7123 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-test-volume\") on node \"functional-840000\" DevicePath \"\""
	Aug 23 18:48:27 functional-840000 kubelet[7123]: I0823 18:48:27.176136    7123 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jm6mg\" (UniqueName: \"kubernetes.io/projected/0a9c04a9-81b0-43c2-9b8a-ade96c7da883-kube-api-access-jm6mg\") on node \"functional-840000\" DevicePath \"\""
	Aug 23 18:48:27 functional-840000 kubelet[7123]: I0823 18:48:27.883598    7123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014fc4c3525bc5633e40f1347b294cd283f2bace3529b2214e9c771c9ec966b1"
	
	* 
	* ==> storage-provisioner [0ad4dacac9b7] <==
	* I0823 18:46:50.655911       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0823 18:46:50.659925       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0823 18:46:50.659942       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [309dfb63643f] <==
	* I0823 18:47:21.742350       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0823 18:47:21.748347       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0823 18:47:21.748366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0823 18:47:39.143288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0823 18:47:39.143363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-840000_33bf7c88-009c-4789-b18e-55f3f6a57ae8!
	I0823 18:47:39.143705       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f2b6dd0-0296-4efe-ab56-6d8776960404", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-840000_33bf7c88-009c-4789-b18e-55f3f6a57ae8 became leader
	I0823 18:47:39.243580       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-840000_33bf7c88-009c-4789-b18e-55f3f6a57ae8!
	I0823 18:48:02.957596       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0823 18:48:02.957626       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    642eb748-cf33-4342-b05c-33fcf1582ea8 390 0 2023-08-23 18:46:18 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-08-23 18:46:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-a94110ac-3bed-4666-92a4-0c37700de822 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  a94110ac-3bed-4666-92a4-0c37700de822 712 0 2023-08-23 18:48:02 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-08-23 18:48:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-08-23 18:48:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0823 18:48:02.958191       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a94110ac-3bed-4666-92a4-0c37700de822", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0823 18:48:02.958321       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-a94110ac-3bed-4666-92a4-0c37700de822" provisioned
	I0823 18:48:02.958357       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0823 18:48:02.958380       1 volume_store.go:212] Trying to save persistentvolume "pvc-a94110ac-3bed-4666-92a4-0c37700de822"
	I0823 18:48:02.965146       1 volume_store.go:219] persistentvolume "pvc-a94110ac-3bed-4666-92a4-0c37700de822" saved
	I0823 18:48:02.966125       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a94110ac-3bed-4666-92a4-0c37700de822", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a94110ac-3bed-4666-92a4-0c37700de822
	
-- /stdout --
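Note on the provisioner log above: it captures the full hostpath provisioning handshake. A 'Provisioning' event is emitted for the claim, the volume is materialized at /tmp/hostpath-provisioner/default/myclaim, a PersistentVolume named pvc-<claim UID> is saved, and a 'ProvisioningSucceeded' event closes the loop. A minimal Go sketch of the two naming rules visible in that log (illustrative only; this is not minikube's storage-provisioner code):

	package main

	import (
		"fmt"
		"path/filepath"
	)

	// pvNameFor mirrors the PV name seen in the log: "pvc-" + claim UID.
	func pvNameFor(claimUID string) string {
		return "pvc-" + claimUID
	}

	// hostPathFor mirrors the backing directory seen in the log:
	// <base>/<namespace>/<claim name>.
	func hostPathFor(base, namespace, claim string) string {
		return filepath.Join(base, namespace, claim)
	}

	func main() {
		fmt.Println(pvNameFor("a94110ac-3bed-4666-92a4-0c37700de822"))
		fmt.Println(hostPathFor("/tmp/hostpath-provisioner", "default", "myclaim"))
	}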
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-840000 -n functional-840000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-840000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-840000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-840000 describe pod busybox-mount:
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-840000/192.168.105.4
	Start Time:       Wed, 23 Aug 2023 11:48:23 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://2ab7ecb21d7e0f4325dcdf586dc5c751ab6cdceed717e59ba8171224e4efb45b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 23 Aug 2023 11:48:25 -0700
	      Finished:     Wed, 23 Aug 2023 11:48:25 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jm6mg (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jm6mg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-840000
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.554s (1.554s including waiting)
	  Normal  Created    9s    kubelet            Created container mount-munger
	  Normal  Started    9s    kubelet            Started container mount-munger
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (28.65s)
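Note: busybox-mount above is Status: Succeeded (a completed helper pod), so it is not itself the failure; the post-mortem filter status.phase!=Running deliberately lists every pod that is not Running, completed ones included. A small Go sketch (illustrative; not the helpers_test.go code) of the same query with completed pods excluded as well:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Comma-separated field selector requirements are ANDed, so this
		// skips pods in either the Running or the Succeeded phase.
		out, err := exec.Command("kubectl", "--context", "functional-840000",
			"get", "po", "-A",
			"--field-selector", "status.phase!=Running,status.phase!=Succeeded",
			"-o", "jsonpath={.items[*].metadata.name}").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Printf("pods neither Running nor Succeeded: %s\n", out)
	}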
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0823 11:47:56.365906    2888 out.go:296] Setting OutFile to fd 1 ...
I0823 11:47:56.366111    2888 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:47:56.366115    2888 out.go:309] Setting ErrFile to fd 2...
I0823 11:47:56.366118    2888 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:47:56.366234    2888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
I0823 11:47:56.366483    2888 mustload.go:65] Loading cluster: functional-840000
I0823 11:47:56.366658    2888 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:47:56.371043    2888 out.go:177] 
W0823 11:47:56.374033    2888 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/monitor: connect: connection refused
W0823 11:47:56.374039    2888 out.go:239] * 
* 
W0823 11:47:56.375484    2888 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0823 11:47:56.378862    2888 out.go:177] 
stdout: 
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2887: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)
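Note: the tunnel exits with GUEST_STATUS because minikube determines machine state by dialing the per-profile QMP monitor unix socket of the QEMU process, and that dial is refused here (the VM is not running). A minimal probe of that socket (sketch; the path is the one taken from the stderr above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		sock := "/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/monitor"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connect: connection refused" means nothing is listening on
			// the socket, i.e. QEMU never started or has already exited.
			fmt.Println("monitor unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("QEMU QMP monitor is listening")
	}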
TestImageBuild/serial/BuildWithBuildArg (1.09s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-056000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-056000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 88cc83840373
	Removing intermediate container 88cc83840373
	 ---> 271e12933d90
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1b00a0c02f81
	Removing intermediate container 1b00a0c02f81
	 ---> 1dfc3dbb74d1
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 478a24faac81
	exec /bin/sh: exec format error
	
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1
** /stderr **
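Note: the build fails at the RUN step rather than in the build-arg handling. The base image is linux/amd64 and the qemu2 guest is linux/arm64/v8 (see the warnings above), so executing the image's /bin/sh dies with "exec format error" unless binfmt/qemu-user emulation is available in the guest. A quick Go sketch (illustrative) that compares the image platform against the Docker daemon's platform before attempting such a build:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// platform runs a docker CLI query and returns its trimmed output.
	func platform(args ...string) string {
		out, err := exec.Command("docker", args...).Output()
		if err != nil {
			return "unknown (" + err.Error() + ")"
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		img := platform("image", "inspect",
			"--format", "{{.Os}}/{{.Architecture}}",
			"gcr.io/google-containers/alpine-with-bash:1.0")
		daemon := platform("version", "--format", "{{.Server.Os}}/{{.Server.Arch}}")
		if img != daemon {
			// On this run: linux/amd64 image vs linux/arm64 daemon.
			fmt.Printf("platform mismatch: image=%s daemon=%s\n", img, daemon)
		}
	}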
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-056000 -n image-056000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-056000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-840000 ssh findmnt            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-840000 ssh findmnt            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-840000 ssh findmnt            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| start          | -p functional-840000                     | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-840000 --dry-run           | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-840000                     | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | -p functional-840000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| ssh            | functional-840000 ssh findmnt            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-840000 ssh findmnt            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-840000 ssh findmnt            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-840000 ssh findmnt            | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| update-context | functional-840000                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-840000                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-840000                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-840000                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-840000                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-840000 ssh pgrep              | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-840000 image build -t         | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | localhost/my-image:functional-840000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-840000 image ls               | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	| image          | functional-840000                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-840000                        | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| delete         | -p functional-840000                     | functional-840000 | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	| start          | -p image-056000 --driver=qemu2           | image-056000      | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:49 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-056000      | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:49 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-056000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-056000      | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:49 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-056000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:48:48
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:48:48.190770    3113 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:48:48.190867    3113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:48:48.190869    3113 out.go:309] Setting ErrFile to fd 2...
	I0823 11:48:48.190871    3113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:48:48.190974    3113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:48:48.191971    3113 out.go:303] Setting JSON to false
	I0823 11:48:48.208157    3113 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2902,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:48:48.208211    3113 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:48:48.211832    3113 out.go:177] * [image-056000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:48:48.222831    3113 notify.go:220] Checking for updates...
	I0823 11:48:48.226844    3113 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:48:48.227805    3113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:48:48.230894    3113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:48:48.233892    3113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:48:48.236877    3113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:48:48.239846    3113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:48:48.243066    3113 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:48:48.246843    3113 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:48:48.253846    3113 start.go:298] selected driver: qemu2
	I0823 11:48:48.253850    3113 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:48:48.253856    3113 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:48:48.253918    3113 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:48:48.256924    3113 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:48:48.262145    3113 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0823 11:48:48.262238    3113 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0823 11:48:48.262255    3113 cni.go:84] Creating CNI manager for ""
	I0823 11:48:48.262260    3113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:48:48.262262    3113 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:48:48.262269    3113 start_flags.go:319] config:
	{Name:image-056000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:image-056000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:48:48.266632    3113 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:48:48.273692    3113 out.go:177] * Starting control plane node image-056000 in cluster image-056000
	I0823 11:48:48.277828    3113 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:48:48.277844    3113 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:48:48.277860    3113 cache.go:57] Caching tarball of preloaded images
	I0823 11:48:48.277925    3113 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:48:48.277929    3113 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:48:48.278177    3113 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/config.json ...
	I0823 11:48:48.278190    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/config.json: {Name:mkbd5bb28cc036f17bb5e32cc9d953ed5e0f6362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:48:48.278387    3113 start.go:365] acquiring machines lock for image-056000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:48:48.278414    3113 start.go:369] acquired machines lock for "image-056000" in 23.625µs
	I0823 11:48:48.278423    3113 start.go:93] Provisioning new machine with config: &{Name:image-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:image-056000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:48:48.278455    3113 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:48:48.285850    3113 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0823 11:48:48.307759    3113 start.go:159] libmachine.API.Create for "image-056000" (driver="qemu2")
	I0823 11:48:48.307806    3113 client.go:168] LocalClient.Create starting
	I0823 11:48:48.308132    3113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:48:48.308169    3113 main.go:141] libmachine: Decoding PEM data...
	I0823 11:48:48.308179    3113 main.go:141] libmachine: Parsing certificate...
	I0823 11:48:48.308225    3113 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:48:48.308244    3113 main.go:141] libmachine: Decoding PEM data...
	I0823 11:48:48.308255    3113 main.go:141] libmachine: Parsing certificate...
	I0823 11:48:48.308577    3113 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:48:48.495266    3113 main.go:141] libmachine: Creating SSH key...
	I0823 11:48:48.579894    3113 main.go:141] libmachine: Creating Disk image...
	I0823 11:48:48.579898    3113 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:48:48.580031    3113 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/disk.qcow2
	I0823 11:48:48.588984    3113 main.go:141] libmachine: STDOUT: 
	I0823 11:48:48.588996    3113 main.go:141] libmachine: STDERR: 
	I0823 11:48:48.589051    3113 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/disk.qcow2 +20000M
	I0823 11:48:48.596148    3113 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:48:48.596158    3113 main.go:141] libmachine: STDERR: 
	I0823 11:48:48.596174    3113 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/disk.qcow2
	I0823 11:48:48.596181    3113 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:48:48.596225    3113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:84:74:00:74:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/disk.qcow2
	I0823 11:48:48.631157    3113 main.go:141] libmachine: STDOUT: 
	I0823 11:48:48.631175    3113 main.go:141] libmachine: STDERR: 
	I0823 11:48:48.631178    3113 main.go:141] libmachine: Attempt 0
	I0823 11:48:48.631190    3113 main.go:141] libmachine: Searching for 8e:84:74:0:74:3 in /var/db/dhcpd_leases ...
	I0823 11:48:48.631272    3113 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0823 11:48:48.631286    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:48:48.631294    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:48:48.631298    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:48:50.633431    3113 main.go:141] libmachine: Attempt 1
	I0823 11:48:50.633505    3113 main.go:141] libmachine: Searching for 8e:84:74:0:74:3 in /var/db/dhcpd_leases ...
	I0823 11:48:50.634040    3113 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0823 11:48:50.634094    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:48:50.634122    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:48:50.634182    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:48:52.636306    3113 main.go:141] libmachine: Attempt 2
	I0823 11:48:52.636328    3113 main.go:141] libmachine: Searching for 8e:84:74:0:74:3 in /var/db/dhcpd_leases ...
	I0823 11:48:52.636482    3113 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0823 11:48:52.636492    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:48:52.636497    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:48:52.636501    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:48:54.638545    3113 main.go:141] libmachine: Attempt 3
	I0823 11:48:54.638580    3113 main.go:141] libmachine: Searching for 8e:84:74:0:74:3 in /var/db/dhcpd_leases ...
	I0823 11:48:54.638625    3113 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0823 11:48:54.638630    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:48:54.638642    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:48:54.638649    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:48:56.640644    3113 main.go:141] libmachine: Attempt 4
	I0823 11:48:56.640672    3113 main.go:141] libmachine: Searching for 8e:84:74:0:74:3 in /var/db/dhcpd_leases ...
	I0823 11:48:56.640776    3113 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0823 11:48:56.640785    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:48:56.640789    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:48:56.640794    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:48:58.642833    3113 main.go:141] libmachine: Attempt 5
	I0823 11:48:58.642848    3113 main.go:141] libmachine: Searching for 8e:84:74:0:74:3 in /var/db/dhcpd_leases ...
	I0823 11:48:58.642945    3113 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0823 11:48:58.642953    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:48:58.642958    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:48:58.642968    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:49:00.644995    3113 main.go:141] libmachine: Attempt 6
	I0823 11:49:00.645015    3113 main.go:141] libmachine: Searching for 8e:84:74:0:74:3 in /var/db/dhcpd_leases ...
	I0823 11:49:00.645134    3113 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0823 11:49:00.645148    3113 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:84:74:0:74:3 ID:1,8e:84:74:0:74:3 Lease:0x64e7a61b}
	I0823 11:49:00.645151    3113 main.go:141] libmachine: Found match: 8e:84:74:0:74:3
	I0823 11:49:00.645162    3113 main.go:141] libmachine: IP: 192.168.105.5
	I0823 11:49:00.645167    3113 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0823 11:49:01.694002    3113 machine.go:88] provisioning docker machine ...
	I0823 11:49:01.694020    3113 buildroot.go:166] provisioning hostname "image-056000"
	I0823 11:49:01.694062    3113 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:01.694330    3113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ad21e0] 0x104ad4c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0823 11:49:01.694334    3113 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-056000 && echo "image-056000" | sudo tee /etc/hostname
	I0823 11:49:01.751832    3113 main.go:141] libmachine: SSH cmd err, output: <nil>: image-056000
	
	I0823 11:49:01.751886    3113 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:01.752131    3113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ad21e0] 0x104ad4c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0823 11:49:01.752138    3113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-056000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-056000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-056000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:49:01.806531    3113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 11:49:01.806539    3113 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:49:01.806544    3113 buildroot.go:174] setting up certificates
	I0823 11:49:01.806552    3113 provision.go:83] configureAuth start
	I0823 11:49:01.806554    3113 provision.go:138] copyHostCerts
	I0823 11:49:01.806614    3113 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem, removing ...
	I0823 11:49:01.806618    3113 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem
	I0823 11:49:01.806725    3113 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:49:01.806903    3113 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem, removing ...
	I0823 11:49:01.806904    3113 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem
	I0823 11:49:01.806986    3113 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
	I0823 11:49:01.807081    3113 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem, removing ...
	I0823 11:49:01.807083    3113 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem
	I0823 11:49:01.807120    3113 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:49:01.807193    3113 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.image-056000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-056000]
	I0823 11:49:01.898228    3113 provision.go:172] copyRemoteCerts
	I0823 11:49:01.898255    3113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:49:01.898260    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa Username:docker}
	I0823 11:49:01.925665    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0823 11:49:01.932701    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:49:01.939536    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0823 11:49:01.946581    3113 provision.go:86] duration metric: configureAuth took 140.025417ms
	I0823 11:49:01.946586    3113 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:49:01.946682    3113 config.go:182] Loaded profile config "image-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:49:01.946713    3113 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:01.946929    3113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ad21e0] 0x104ad4c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0823 11:49:01.946932    3113 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:49:02.000565    3113 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:49:02.000568    3113 buildroot.go:70] root file system type: tmpfs
	I0823 11:49:02.000621    3113 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:49:02.000661    3113 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:02.000885    3113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ad21e0] 0x104ad4c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0823 11:49:02.000921    3113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:49:02.056295    3113 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0823 11:49:02.056340    3113 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:02.056572    3113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ad21e0] 0x104ad4c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0823 11:49:02.056578    3113 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:49:02.402748    3113 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0823 11:49:02.402758    3113 machine.go:91] provisioned docker machine in 708.768959ms
	I0823 11:49:02.402763    3113 client.go:171] LocalClient.Create took 14.095304708s
	I0823 11:49:02.402782    3113 start.go:167] duration metric: libmachine.API.Create for "image-056000" took 14.095388375s
	I0823 11:49:02.402785    3113 start.go:300] post-start starting for "image-056000" (driver="qemu2")
	I0823 11:49:02.402789    3113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:49:02.402860    3113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:49:02.402868    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa Username:docker}
	I0823 11:49:02.431406    3113 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:49:02.432888    3113 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:49:02.432906    3113 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:49:02.432991    3113 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:49:02.433095    3113 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0823 11:49:02.433213    3113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0823 11:49:02.435804    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0823 11:49:02.442642    3113 start.go:303] post-start completed in 39.8545ms
	I0823 11:49:02.443045    3113 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/config.json ...
	I0823 11:49:02.443209    3113 start.go:128] duration metric: createHost completed in 14.165103958s
	I0823 11:49:02.443237    3113 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:02.443457    3113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ad21e0] 0x104ad4c40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0823 11:49:02.443460    3113 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0823 11:49:02.496230    3113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692816542.459285043
	
	I0823 11:49:02.496234    3113 fix.go:206] guest clock: 1692816542.459285043
	I0823 11:49:02.496237    3113 fix.go:219] Guest: 2023-08-23 11:49:02.459285043 -0700 PDT Remote: 2023-08-23 11:49:02.443215 -0700 PDT m=+14.272414584 (delta=16.070043ms)
	I0823 11:49:02.496246    3113 fix.go:190] guest clock delta is within tolerance: 16.070043ms
	I0823 11:49:02.496247    3113 start.go:83] releasing machines lock for "image-056000", held for 14.218184958s
	I0823 11:49:02.496516    3113 ssh_runner.go:195] Run: cat /version.json
	I0823 11:49:02.496516    3113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:49:02.496522    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa Username:docker}
	I0823 11:49:02.496535    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa Username:docker}
	I0823 11:49:02.565346    3113 ssh_runner.go:195] Run: systemctl --version
	I0823 11:49:02.567534    3113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:49:02.569379    3113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:49:02.569428    3113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0823 11:49:02.574406    3113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 11:49:02.574410    3113 start.go:466] detecting cgroup driver to use...
	I0823 11:49:02.574470    3113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:49:02.579977    3113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0823 11:49:02.582950    3113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:49:02.586138    3113 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:49:02.586161    3113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:49:02.589484    3113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:49:02.592428    3113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:49:02.595224    3113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:49:02.598408    3113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:49:02.601738    3113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:49:02.604634    3113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:49:02.607274    3113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:49:02.610305    3113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:02.688415    3113 ssh_runner.go:195] Run: sudo systemctl restart containerd
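
The sed edits above boil down to a handful of fields in /etc/containerd/config.toml. A sketch of the expected end state (inspection only; surrounding TOML context elided, field names per containerd's CRI plugin):

    grep -E 'sandbox_image|SystemdCgroup|conf_dir|runc' /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.9"   # pause image pinned for v1.28
    #   SystemdCgroup = false                         # cgroupfs driver, matching the kubelet
    #   conf_dir = "/etc/cni/net.d"                   # where the bridge conflist lands later
    #   runtime_type = "io.containerd.runc.v2"        # v1/linux shims rewritten to runc.v2
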
	I0823 11:49:02.697262    3113 start.go:466] detecting cgroup driver to use...
	I0823 11:49:02.697320    3113 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:49:02.702906    3113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:49:02.708313    3113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:49:02.713890    3113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:49:02.718475    3113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:49:02.723694    3113 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 11:49:02.766242    3113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:49:02.771686    3113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:49:02.777296    3113 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:49:02.778558    3113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:49:02.781522    3113 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:49:02.786650    3113 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:49:02.873428    3113 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:49:02.946640    3113 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:49:02.946661    3113 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0823 11:49:02.952184    3113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:03.025099    3113 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:49:04.190610    3113 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.165530541s)
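
The 144-byte daemon.json payload is not reproduced in the log. A sketch of what minikube typically writes to select the cgroupfs driver (values are representative, not verbatim from this run):

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
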
	I0823 11:49:04.190671    3113 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:49:04.267737    3113 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0823 11:49:04.346015    3113 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0823 11:49:04.425281    3113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:04.500161    3113 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0823 11:49:04.507231    3113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:04.580038    3113 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0823 11:49:04.606262    3113 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0823 11:49:04.606347    3113 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0823 11:49:04.608393    3113 start.go:534] Will wait 60s for crictl version
	I0823 11:49:04.608426    3113 ssh_runner.go:195] Run: which crictl
	I0823 11:49:04.609703    3113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0823 11:49:04.626396    3113 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0823 11:49:04.626453    3113 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:49:04.635802    3113 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:49:04.646374    3113 out.go:204] * Preparing Kubernetes v1.28.0 on Docker 24.0.4 ...
	I0823 11:49:04.646454    3113 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:49:04.647749    3113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:49:04.651340    3113 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:49:04.651379    3113 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:49:04.660551    3113 docker.go:636] Got preloaded images: 
	I0823 11:49:04.660556    3113 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.0 wasn't preloaded
	I0823 11:49:04.660594    3113 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:49:04.663467    3113 ssh_runner.go:195] Run: which lz4
	I0823 11:49:04.664811    3113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 11:49:04.666028    3113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0823 11:49:04.666039    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356920680 bytes)
	I0823 11:49:05.982465    3113 docker.go:600] Took 1.317729 seconds to copy over tarball
	I0823 11:49:05.982521    3113 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 11:49:07.019354    3113 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.036849875s)
	I0823 11:49:07.019363    3113 ssh_runner.go:146] rm: /preloaded.tar.lz4
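
Done by hand, the preload step is just copy-and-extract: the tarball carries docker's /var/lib/docker image store, so unpacking it under /var seeds the runtime without any pulls. A sketch with the same paths (the test runner streams this over its own SSH session rather than literal scp/ssh):

    scp -i /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa \
        /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 \
        docker@192.168.105.5:/preloaded.tar.lz4
    ssh -i /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa docker@192.168.105.5 \
        'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'
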
	I0823 11:49:07.035338    3113 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:49:07.038626    3113 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0823 11:49:07.043724    3113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:07.120609    3113 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:49:08.573961    3113 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4533825s)
	I0823 11:49:08.574037    3113 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:49:08.584998    3113 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0823 11:49:08.585004    3113 cache_images.go:84] Images are preloaded, skipping loading
	I0823 11:49:08.585062    3113 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:49:08.592581    3113 cni.go:84] Creating CNI manager for ""
	I0823 11:49:08.592586    3113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:49:08.592596    3113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:49:08.592603    3113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-056000 NodeName:image-056000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0823 11:49:08.592668    3113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-056000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
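
Before init, the rendered file can be sanity-checked. kubeadm gained a `config validate` subcommand in v1.26, so with the binaries staged a few lines below, this should work (not something this run does):

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
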
	
	I0823 11:49:08.592700    3113 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-056000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:image-056000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0823 11:49:08.592749    3113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0823 11:49:08.595700    3113 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:49:08.595731    3113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:49:08.598552    3113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0823 11:49:08.603682    3113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0823 11:49:08.608597    3113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0823 11:49:08.613706    3113 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0823 11:49:08.615076    3113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
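
Unrolled, the /etc/hosts rewrite above (the same pattern used earlier for host.minikube.internal) is a drop-then-append through a temp file:

    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$    # strip any stale mapping
    printf '192.168.105.5\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$  # append the fresh one
    sudo cp /tmp/h.$$ /etc/hosts   # copy back with sudo, since a plain > redirect cannot elevate
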
	I0823 11:49:08.618842    3113 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000 for IP: 192.168.105.5
	I0823 11:49:08.618850    3113 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:08.618992    3113 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:49:08.619028    3113 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:49:08.619051    3113 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/client.key
	I0823 11:49:08.619057    3113 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/client.crt with IP's: []
	I0823 11:49:08.718444    3113 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/client.crt ...
	I0823 11:49:08.718447    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/client.crt: {Name:mk3d5225fba3b92fa4d920a8e6862cf4b245a489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:08.718661    3113 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/client.key ...
	I0823 11:49:08.718663    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/client.key: {Name:mk655d3a56080595f86017e7ff43133193976500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:08.718774    3113 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.key.e69b33ca
	I0823 11:49:08.718780    3113 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0823 11:49:08.798760    3113 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.crt.e69b33ca ...
	I0823 11:49:08.798762    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.crt.e69b33ca: {Name:mk9a6ca7bbe44302cdf7e110e3ac1868727a440f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:08.798902    3113 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.key.e69b33ca ...
	I0823 11:49:08.798904    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.key.e69b33ca: {Name:mka133125f6b50e0982e5fda0fb5db3375273577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:08.799005    3113 certs.go:337] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.crt
	I0823 11:49:08.799179    3113 certs.go:341] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.key
	I0823 11:49:08.799309    3113 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.key
	I0823 11:49:08.799315    3113 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.crt with IP's: []
	I0823 11:49:09.019583    3113 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.crt ...
	I0823 11:49:09.019588    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.crt: {Name:mke7b8ea6db27b6596cd2423fc215727f0c79c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:09.019784    3113 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.key ...
	I0823 11:49:09.019786    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.key: {Name:mkc061f7bb67d9dca6c88d35466ef6bacd6234a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:09.020006    3113 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554.pem (1338 bytes)
	W0823 11:49:09.020032    3113 certs.go:433] ignoring /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0823 11:49:09.020037    3113 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:49:09.020054    3113 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:49:09.020070    3113 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:49:09.020086    3113 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:49:09.020122    3113 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0823 11:49:09.020421    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:49:09.027841    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:49:09.034604    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:49:09.041938    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/image-056000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0823 11:49:09.049088    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:49:09.055905    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:49:09.062585    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:49:09.069742    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:49:09.076805    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:49:09.083580    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0823 11:49:09.090420    3113 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0823 11:49:09.097505    3113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0823 11:49:09.102400    3113 ssh_runner.go:195] Run: openssl version
	I0823 11:49:09.104334    3113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0823 11:49:09.107291    3113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0823 11:49:09.108742    3113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 23 18:45 /usr/share/ca-certificates/1554.pem
	I0823 11:49:09.108759    3113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0823 11:49:09.110462    3113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0823 11:49:09.113581    3113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0823 11:49:09.116549    3113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0823 11:49:09.118016    3113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 23 18:45 /usr/share/ca-certificates/15542.pem
	I0823 11:49:09.118032    3113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0823 11:49:09.119866    3113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0823 11:49:09.122883    3113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:49:09.126083    3113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:49:09.127572    3113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:49:09.127590    3113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:49:09.129360    3113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
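
The <hash>.0 symlink names come straight from the `openssl x509 -hash` calls above: OpenSSL looks CA certificates up by subject hash. For the minikubeCA case:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # the name OpenSSL resolves
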
	I0823 11:49:09.132105    3113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:49:09.133351    3113 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0823 11:49:09.133383    3113 kubeadm.go:404] StartCluster: {Name:image-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:image-056000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:49:09.133446    3113 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:49:09.138798    3113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:49:09.142228    3113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:49:09.145294    3113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:49:09.147957    3113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 11:49:09.147968    3113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0823 11:49:09.170873    3113 kubeadm.go:322] [init] Using Kubernetes version: v1.28.0
	I0823 11:49:09.170977    3113 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 11:49:09.223843    3113 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 11:49:09.223894    3113 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 11:49:09.223952    3113 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 11:49:09.284882    3113 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 11:49:09.289076    3113 out.go:204]   - Generating certificates and keys ...
	I0823 11:49:09.289126    3113 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 11:49:09.289159    3113 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 11:49:09.374994    3113 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0823 11:49:09.516551    3113 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0823 11:49:09.570431    3113 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0823 11:49:09.666365    3113 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0823 11:49:09.760201    3113 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0823 11:49:09.760257    3113 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-056000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0823 11:49:09.814257    3113 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0823 11:49:09.814320    3113 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-056000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0823 11:49:09.931953    3113 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0823 11:49:10.016035    3113 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0823 11:49:10.058564    3113 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0823 11:49:10.058588    3113 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 11:49:10.091024    3113 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 11:49:10.142267    3113 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 11:49:10.375703    3113 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 11:49:10.533124    3113 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 11:49:10.533335    3113 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 11:49:10.534520    3113 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 11:49:10.544622    3113 out.go:204]   - Booting up control plane ...
	I0823 11:49:10.544678    3113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 11:49:10.544729    3113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 11:49:10.544769    3113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 11:49:10.544814    3113 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 11:49:10.544859    3113 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 11:49:10.544883    3113 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 11:49:10.634737    3113 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 11:49:14.637041    3113 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002415 seconds
	I0823 11:49:14.637124    3113 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0823 11:49:14.642278    3113 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0823 11:49:15.151163    3113 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0823 11:49:15.151264    3113 kubeadm.go:322] [mark-control-plane] Marking the node image-056000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0823 11:49:15.656717    3113 kubeadm.go:322] [bootstrap-token] Using token: 4dgjcf.07flmx3k7u9cku6a
	I0823 11:49:15.663122    3113 out.go:204]   - Configuring RBAC rules ...
	I0823 11:49:15.663169    3113 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0823 11:49:15.664056    3113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0823 11:49:15.670338    3113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0823 11:49:15.671269    3113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0823 11:49:15.672422    3113 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0823 11:49:15.673497    3113 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0823 11:49:15.677561    3113 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0823 11:49:15.865065    3113 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0823 11:49:16.066026    3113 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0823 11:49:16.066353    3113 kubeadm.go:322] 
	I0823 11:49:16.066377    3113 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0823 11:49:16.066379    3113 kubeadm.go:322] 
	I0823 11:49:16.066410    3113 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0823 11:49:16.066412    3113 kubeadm.go:322] 
	I0823 11:49:16.066422    3113 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0823 11:49:16.066444    3113 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0823 11:49:16.066464    3113 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0823 11:49:16.066465    3113 kubeadm.go:322] 
	I0823 11:49:16.066488    3113 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0823 11:49:16.066489    3113 kubeadm.go:322] 
	I0823 11:49:16.066517    3113 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0823 11:49:16.066518    3113 kubeadm.go:322] 
	I0823 11:49:16.066540    3113 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0823 11:49:16.066573    3113 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0823 11:49:16.066610    3113 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0823 11:49:16.066613    3113 kubeadm.go:322] 
	I0823 11:49:16.066649    3113 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0823 11:49:16.066688    3113 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0823 11:49:16.066690    3113 kubeadm.go:322] 
	I0823 11:49:16.066750    3113 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4dgjcf.07flmx3k7u9cku6a \
	I0823 11:49:16.066809    3113 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f \
	I0823 11:49:16.066819    3113 kubeadm.go:322] 	--control-plane 
	I0823 11:49:16.066820    3113 kubeadm.go:322] 
	I0823 11:49:16.066864    3113 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0823 11:49:16.066865    3113 kubeadm.go:322] 
	I0823 11:49:16.066913    3113 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4dgjcf.07flmx3k7u9cku6a \
	I0823 11:49:16.066959    3113 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f 
	I0823 11:49:16.067134    3113 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
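
The --discovery-token-ca-cert-hash printed above is a SHA-256 over the cluster CA's DER-encoded public key. A sketch of recomputing it inside the guest, using minikube's cert path from this run:

    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256   # expect 25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f
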
	I0823 11:49:16.067153    3113 cni.go:84] Creating CNI manager for ""
	I0823 11:49:16.067159    3113 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:49:16.073921    3113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0823 11:49:16.077760    3113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0823 11:49:16.081048    3113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
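
The 457-byte conflist payload itself is not logged. A minimal sketch of the kind of bridge config minikube installs here, with the pod CIDR from the kubeadm options above (field values are representative, not verbatim):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
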
	I0823 11:49:16.085894    3113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:49:16.085942    3113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:49:16.085952    3113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128 minikube.k8s.io/name=image-056000 minikube.k8s.io/updated_at=2023_08_23T11_49_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:49:16.149255    3113 kubeadm.go:1081] duration metric: took 63.354208ms to wait for elevateKubeSystemPrivileges.
	I0823 11:49:16.149268    3113 ops.go:34] apiserver oom_adj: -16
	I0823 11:49:16.149274    3113 kubeadm.go:406] StartCluster complete in 7.016110584s
	I0823 11:49:16.149282    3113 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:16.149364    3113 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:49:16.149693    3113 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:16.149878    3113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:49:16.149912    3113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0823 11:49:16.149937    3113 addons.go:69] Setting storage-provisioner=true in profile "image-056000"
	I0823 11:49:16.149943    3113 addons.go:231] Setting addon storage-provisioner=true in "image-056000"
	I0823 11:49:16.149948    3113 addons.go:69] Setting default-storageclass=true in profile "image-056000"
	I0823 11:49:16.149958    3113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-056000"
	I0823 11:49:16.150011    3113 host.go:66] Checking if "image-056000" exists ...
	I0823 11:49:16.150216    3113 config.go:182] Loaded profile config "image-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:49:16.154842    3113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:49:16.157810    3113 addons.go:231] Setting addon default-storageclass=true in "image-056000"
	I0823 11:49:16.158931    3113 host.go:66] Checking if "image-056000" exists ...
	I0823 11:49:16.158960    3113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0823 11:49:16.158964    3113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0823 11:49:16.158972    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa Username:docker}
	I0823 11:49:16.159706    3113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:49:16.159709    3113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:49:16.159712    3113 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/image-056000/id_rsa Username:docker}
	I0823 11:49:16.161417    3113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-056000" context rescaled to 1 replicas
	I0823 11:49:16.161427    3113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:49:16.164881    3113 out.go:177] * Verifying Kubernetes components...
	I0823 11:49:16.172810    3113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:49:16.191900    3113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
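
The sed pipeline above splices a hosts block into the Corefile ahead of the forward directive, so pods can resolve host.minikube.internal without leaving the cluster. To inspect the result:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # the output should contain the injected block:
    #         hosts {
    #            192.168.105.1 host.minikube.internal
    #            fallthrough
    #         }
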
	I0823 11:49:16.192264    3113 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:49:16.192294    3113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:49:16.195963    3113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0823 11:49:16.199651    3113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:49:16.640454    3113 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0823 11:49:16.640471    3113 api_server.go:72] duration metric: took 479.050333ms to wait for apiserver process to appear ...
	I0823 11:49:16.640476    3113 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:49:16.640488    3113 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0823 11:49:16.643588    3113 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
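
The healthz probe is a plain HTTPS GET against the API server; by hand (-k because the serving cert chains to minikubeCA, which the host does not trust by default):

    curl -sk https://192.168.105.5:8443/healthz   # -> ok
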
	I0823 11:49:16.644241    3113 api_server.go:141] control plane version: v1.28.0
	I0823 11:49:16.644245    3113 api_server.go:131] duration metric: took 3.767834ms to wait for apiserver health ...
	I0823 11:49:16.644247    3113 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:49:16.647107    3113 system_pods.go:59] 4 kube-system pods found
	I0823 11:49:16.647114    3113 system_pods.go:61] "etcd-image-056000" [0cd2e712-6b36-4b0f-86a6-1ddf92d3c4de] Pending
	I0823 11:49:16.647116    3113 system_pods.go:61] "kube-apiserver-image-056000" [af2900aa-dcc4-4b4b-96ea-d1938fbc097a] Pending
	I0823 11:49:16.647118    3113 system_pods.go:61] "kube-controller-manager-image-056000" [a9b230c3-c41a-455f-908d-463459ec7cc6] Pending
	I0823 11:49:16.647119    3113 system_pods.go:61] "kube-scheduler-image-056000" [5869c0ee-1277-4191-9ca8-81c86d2dd894] Pending
	I0823 11:49:16.647121    3113 system_pods.go:74] duration metric: took 2.872583ms to wait for pod list to return data ...
	I0823 11:49:16.647125    3113 kubeadm.go:581] duration metric: took 485.70575ms to wait for : map[apiserver:true system_pods:true] ...
	I0823 11:49:16.647130    3113 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:49:16.648637    3113 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:49:16.648645    3113 node_conditions.go:123] node cpu capacity is 2
	I0823 11:49:16.648650    3113 node_conditions.go:105] duration metric: took 1.518667ms to run NodePressure ...
	I0823 11:49:16.648654    3113 start.go:228] waiting for startup goroutines ...
	I0823 11:49:16.698076    3113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0823 11:49:16.705940    3113 addons.go:502] enable addons completed in 556.048333ms: enabled=[storage-provisioner default-storageclass]
	I0823 11:49:16.705951    3113 start.go:233] waiting for cluster config update ...
	I0823 11:49:16.705955    3113 start.go:242] writing updated cluster config ...
	I0823 11:49:16.706186    3113 ssh_runner.go:195] Run: rm -f paused
	I0823 11:49:16.733796    3113 start.go:600] kubectl: 1.27.2, cluster: 1.28.0 (minor skew: 1)
	I0823 11:49:16.738018    3113 out.go:177] * Done! kubectl is now configured to use "image-056000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:48:59 UTC, ends at Wed 2023-08-23 18:49:19 UTC. --
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.598528047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.598536881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.608437297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.608479922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.608488381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.608494631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.612324047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.612349589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.612358797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.612365089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.634374131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.634417047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.634428547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:49:11 image-056000 dockerd[1110]: time="2023-08-23T18:49:11.634437172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:18 image-056000 dockerd[1104]: time="2023-08-23T18:49:18.417901217Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 23 18:49:18 image-056000 dockerd[1104]: time="2023-08-23T18:49:18.531066967Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 23 18:49:18 image-056000 dockerd[1104]: time="2023-08-23T18:49:18.546664509Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 23 18:49:18 image-056000 dockerd[1110]: time="2023-08-23T18:49:18.594751842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:49:18 image-056000 dockerd[1110]: time="2023-08-23T18:49:18.594783467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:18 image-056000 dockerd[1110]: time="2023-08-23T18:49:18.594980467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:49:18 image-056000 dockerd[1110]: time="2023-08-23T18:49:18.594992467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:49:18 image-056000 dockerd[1104]: time="2023-08-23T18:49:18.728496259Z" level=info msg="ignoring event" container=478a24faac818ffdf338172e938e15edbf7b7c93e17196ab6737953ab0fa7cdd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:49:18 image-056000 dockerd[1110]: time="2023-08-23T18:49:18.728573134Z" level=info msg="shim disconnected" id=478a24faac818ffdf338172e938e15edbf7b7c93e17196ab6737953ab0fa7cdd namespace=moby
	Aug 23 18:49:18 image-056000 dockerd[1110]: time="2023-08-23T18:49:18.728597259Z" level=warning msg="cleaning up after shim disconnected" id=478a24faac818ffdf338172e938e15edbf7b7c93e17196ab6737953ab0fa7cdd namespace=moby
	Aug 23 18:49:18 image-056000 dockerd[1110]: time="2023-08-23T18:49:18.728601217Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5353c4b97e292       762dce4090c5f       8 seconds ago       Running             kube-scheduler            0                   d85ed28a9db00
	3f01280f7a75f       9cdd6470f48c8       8 seconds ago       Running             etcd                      0                   82a50c204e8fe
	11e3dc19ea80a       46cc66ccc7c19       8 seconds ago       Running             kube-controller-manager   0                   a083d8f0724fb
	13882dc4a36a2       00543d2fe5d71       8 seconds ago       Running             kube-apiserver            0                   265b8d36ff63e
	
	* 
	* ==> describe nodes <==
	* Name:               image-056000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-056000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=image-056000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_49_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:49:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-056000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:49:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:49:15 +0000   Wed, 23 Aug 2023 18:49:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:49:15 +0000   Wed, 23 Aug 2023 18:49:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:49:15 +0000   Wed, 23 Aug 2023 18:49:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 23 Aug 2023 18:49:15 +0000   Wed, 23 Aug 2023 18:49:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-056000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 f839fcae79bb4afebc5ad99bd5815582
	  System UUID:                f839fcae79bb4afebc5ad99bd5815582
	  Boot ID:                    a96b5aa1-36fe-43d1-a773-ef8f78017885
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-056000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-056000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-image-056000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-image-056000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node image-056000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node image-056000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node image-056000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Aug23 18:48] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.661240] EINJ: EINJ table not found.
	[  +0.527433] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043478] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000927] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug23 18:49] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.076046] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[  +0.419706] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.184622] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +0.074846] systemd-fstab-generator[718]: Ignoring "noauto" for root device
	[  +0.079355] systemd-fstab-generator[731]: Ignoring "noauto" for root device
	[  +1.149611] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.092389] systemd-fstab-generator[919]: Ignoring "noauto" for root device
	[  +0.077702] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[  +0.080145] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[  +0.072417] systemd-fstab-generator[952]: Ignoring "noauto" for root device
	[  +0.083030] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +2.539229] systemd-fstab-generator[1097]: Ignoring "noauto" for root device
	[  +3.507534] systemd-fstab-generator[1425]: Ignoring "noauto" for root device
	[  +0.299947] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.826443] systemd-fstab-generator[2330]: Ignoring "noauto" for root device
	[  +2.883505] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [3f01280f7a75] <==
	* {"level":"info","ts":"2023-08-23T18:49:11.726166Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-08-23T18:49:11.726194Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-08-23T18:49:11.72652Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-23T18:49:11.726737Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-23T18:49:11.726782Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-23T18:49:11.727547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-08-23T18:49:11.727775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-08-23T18:49:12.411342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-23T18:49:12.411371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-23T18:49:12.411384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-08-23T18:49:12.411406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-08-23T18:49:12.411409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-08-23T18:49:12.411415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-08-23T18:49:12.411419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-08-23T18:49:12.415207Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:49:12.415269Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-056000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-23T18:49:12.415347Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:49:12.415895Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-23T18:49:12.415353Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-23T18:49:12.419525Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-08-23T18:49:12.423232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:49:12.4233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:49:12.423326Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-23T18:49:12.42641Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-23T18:49:12.438885Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  18:49:19 up 0 min,  0 users,  load average: 0.32, 0.08, 0.03
	Linux image-056000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [13882dc4a36a] <==
	* I0823 18:49:13.093936       1 shared_informer.go:318] Caches are synced for configmaps
	I0823 18:49:13.094125       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0823 18:49:13.094149       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0823 18:49:13.094551       1 controller.go:624] quota admission added evaluator for: namespaces
	I0823 18:49:13.094847       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0823 18:49:13.094859       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0823 18:49:13.100752       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0823 18:49:13.115774       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0823 18:49:13.116055       1 aggregator.go:166] initial CRD sync complete...
	I0823 18:49:13.116104       1 autoregister_controller.go:141] Starting autoregister controller
	I0823 18:49:13.116123       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0823 18:49:13.116179       1 cache.go:39] Caches are synced for autoregister controller
	I0823 18:49:13.997917       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0823 18:49:13.999099       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0823 18:49:13.999108       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0823 18:49:14.145265       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0823 18:49:14.155487       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0823 18:49:14.199369       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0823 18:49:14.205357       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0823 18:49:14.205805       1 controller.go:624] quota admission added evaluator for: endpoints
	I0823 18:49:14.207253       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:49:15.036640       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0823 18:49:15.823581       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0823 18:49:15.827856       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0823 18:49:15.831763       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [11e3dc19ea80] <==
	* I0823 18:49:15.058190       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0823 18:49:15.058196       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0823 18:49:15.058203       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0823 18:49:15.058210       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0823 18:49:15.058217       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0823 18:49:15.058224       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0823 18:49:15.058229       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0823 18:49:15.058238       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0823 18:49:15.058252       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0823 18:49:15.058327       1 resource_quota_controller.go:295] "Starting resource quota controller"
	I0823 18:49:15.058332       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0823 18:49:15.058407       1 resource_quota_monitor.go:291] "QuotaMonitor running"
	I0823 18:49:15.061209       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0823 18:49:15.061290       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0823 18:49:15.061296       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0823 18:49:15.063538       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0823 18:49:15.063595       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0823 18:49:15.063613       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0823 18:49:15.072745       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0823 18:49:15.072810       1 namespace_controller.go:197] "Starting namespace controller"
	I0823 18:49:15.072813       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0823 18:49:15.134458       1 shared_informer.go:318] Caches are synced for tokens
	I0823 18:49:15.135921       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0823 18:49:15.135931       1 ttl_controller.go:124] "Starting TTL controller"
	I0823 18:49:15.135970       1 shared_informer.go:311] Waiting for caches to sync for TTL
	
	* 
	* ==> kube-scheduler [5353c4b97e29] <==
	* W0823 18:49:13.065663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:49:13.065687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0823 18:49:13.065711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0823 18:49:13.065729       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0823 18:49:13.065761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:49:13.065787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0823 18:49:13.065854       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0823 18:49:13.065879       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0823 18:49:13.879624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0823 18:49:13.879640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0823 18:49:13.895078       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0823 18:49:13.895096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0823 18:49:13.909686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0823 18:49:13.909702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0823 18:49:13.913269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0823 18:49:13.913280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0823 18:49:13.960870       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0823 18:49:13.960886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0823 18:49:13.976498       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0823 18:49:13.976508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0823 18:49:14.043262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:49:14.043312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0823 18:49:14.073850       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0823 18:49:14.073909       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0823 18:49:16.656773       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:48:59 UTC, ends at Wed 2023-08-23 18:49:19 UTC. --
	Aug 23 18:49:15 image-056000 kubelet[2342]: I0823 18:49:15.966751    2342 kubelet_node_status.go:70] "Attempting to register node" node="image-056000"
	Aug 23 18:49:15 image-056000 kubelet[2342]: I0823 18:49:15.968241    2342 topology_manager.go:215] "Topology Admit Handler" podUID="a736a1c22b402e82a9d508f2d788fa91" podNamespace="kube-system" podName="kube-apiserver-image-056000"
	Aug 23 18:49:15 image-056000 kubelet[2342]: I0823 18:49:15.968310    2342 topology_manager.go:215] "Topology Admit Handler" podUID="7f4eb3725ceec07fe4e297b0aa5f6bc1" podNamespace="kube-system" podName="kube-controller-manager-image-056000"
	Aug 23 18:49:15 image-056000 kubelet[2342]: I0823 18:49:15.968331    2342 topology_manager.go:215] "Topology Admit Handler" podUID="36480610486560ba6a994df9925579e7" podNamespace="kube-system" podName="kube-scheduler-image-056000"
	Aug 23 18:49:15 image-056000 kubelet[2342]: I0823 18:49:15.968343    2342 topology_manager.go:215] "Topology Admit Handler" podUID="9486586605851bb57b1adba60576c838" podNamespace="kube-system" podName="etcd-image-056000"
	Aug 23 18:49:15 image-056000 kubelet[2342]: I0823 18:49:15.975406    2342 kubelet_node_status.go:108] "Node was previously registered" node="image-056000"
	Aug 23 18:49:15 image-056000 kubelet[2342]: I0823 18:49:15.975442    2342 kubelet_node_status.go:73] "Successfully registered node" node="image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065253    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9486586605851bb57b1adba60576c838-etcd-certs\") pod \"etcd-image-056000\" (UID: \"9486586605851bb57b1adba60576c838\") " pod="kube-system/etcd-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065274    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a736a1c22b402e82a9d508f2d788fa91-ca-certs\") pod \"kube-apiserver-image-056000\" (UID: \"a736a1c22b402e82a9d508f2d788fa91\") " pod="kube-system/kube-apiserver-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065286    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f4eb3725ceec07fe4e297b0aa5f6bc1-flexvolume-dir\") pod \"kube-controller-manager-image-056000\" (UID: \"7f4eb3725ceec07fe4e297b0aa5f6bc1\") " pod="kube-system/kube-controller-manager-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065294    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f4eb3725ceec07fe4e297b0aa5f6bc1-kubeconfig\") pod \"kube-controller-manager-image-056000\" (UID: \"7f4eb3725ceec07fe4e297b0aa5f6bc1\") " pod="kube-system/kube-controller-manager-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065305    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f4eb3725ceec07fe4e297b0aa5f6bc1-k8s-certs\") pod \"kube-controller-manager-image-056000\" (UID: \"7f4eb3725ceec07fe4e297b0aa5f6bc1\") " pod="kube-system/kube-controller-manager-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065319    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f4eb3725ceec07fe4e297b0aa5f6bc1-usr-share-ca-certificates\") pod \"kube-controller-manager-image-056000\" (UID: \"7f4eb3725ceec07fe4e297b0aa5f6bc1\") " pod="kube-system/kube-controller-manager-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065327    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36480610486560ba6a994df9925579e7-kubeconfig\") pod \"kube-scheduler-image-056000\" (UID: \"36480610486560ba6a994df9925579e7\") " pod="kube-system/kube-scheduler-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065336    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9486586605851bb57b1adba60576c838-etcd-data\") pod \"etcd-image-056000\" (UID: \"9486586605851bb57b1adba60576c838\") " pod="kube-system/etcd-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065344    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a736a1c22b402e82a9d508f2d788fa91-k8s-certs\") pod \"kube-apiserver-image-056000\" (UID: \"a736a1c22b402e82a9d508f2d788fa91\") " pod="kube-system/kube-apiserver-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065359    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a736a1c22b402e82a9d508f2d788fa91-usr-share-ca-certificates\") pod \"kube-apiserver-image-056000\" (UID: \"a736a1c22b402e82a9d508f2d788fa91\") " pod="kube-system/kube-apiserver-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.065368    2342 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f4eb3725ceec07fe4e297b0aa5f6bc1-ca-certs\") pod \"kube-controller-manager-image-056000\" (UID: \"7f4eb3725ceec07fe4e297b0aa5f6bc1\") " pod="kube-system/kube-controller-manager-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.852105    2342 apiserver.go:52] "Watching apiserver"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.864033    2342 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Aug 23 18:49:16 image-056000 kubelet[2342]: E0823 18:49:16.909220    2342 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-056000\" already exists" pod="kube-system/kube-apiserver-image-056000"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.921220    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-056000" podStartSLOduration=1.921194633 podCreationTimestamp="2023-08-23 18:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-23 18:49:16.916394592 +0000 UTC m=+1.106158626" watchObservedRunningTime="2023-08-23 18:49:16.921194633 +0000 UTC m=+1.110958626"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.924682    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-056000" podStartSLOduration=1.9246672999999999 podCreationTimestamp="2023-08-23 18:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-23 18:49:16.921340383 +0000 UTC m=+1.111104418" watchObservedRunningTime="2023-08-23 18:49:16.9246673 +0000 UTC m=+1.114431293"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.928015    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-056000" podStartSLOduration=1.9279806750000001 podCreationTimestamp="2023-08-23 18:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-23 18:49:16.927938842 +0000 UTC m=+1.117702835" watchObservedRunningTime="2023-08-23 18:49:16.927980675 +0000 UTC m=+1.117744710"
	Aug 23 18:49:16 image-056000 kubelet[2342]: I0823 18:49:16.928082    2342 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-056000" podStartSLOduration=1.9280743 podCreationTimestamp="2023-08-23 18:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-23 18:49:16.924789508 +0000 UTC m=+1.114553543" watchObservedRunningTime="2023-08-23 18:49:16.9280743 +0000 UTC m=+1.117838335"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-056000 -n image-056000
helpers_test.go:261: (dbg) Run:  kubectl --context image-056000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-056000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-056000 describe pod storage-provisioner: exit status 1 (37.184291ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-056000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.09s)
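A minimal local replay of this failure (a sketch: it assumes a built out/minikube-darwin-arm64 binary and the minikube repo's testdata directory; the profile name, flags, and path are taken verbatim from the Audit table recorded below):

  # Recreate the profile, re-run the exact BuildWithBuildArg invocation, clean up.
  out/minikube-darwin-arm64 start -p image-056000 --driver=qemu2
  out/minikube-darwin-arm64 image build -t aaa:latest \
    --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
    ./testdata/image-build/test-arg -p image-056000
  out/minikube-darwin-arm64 delete -p image-056000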

TestIngressAddonLegacy/serial/ValidateIngressAddons (58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-537000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-537000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.617444958s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-537000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-537000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a9d5895b-8c16-4af3-ae24-dfd15f24cf38] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a9d5895b-8c16-4af3-ae24-dfd15f24cf38] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.010451s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-537000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
E0823 11:51:27.115030    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.040660458s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
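A quick manual cross-check for this timeout (a sketch: it assumes the ingress-addon-legacy-537000 VM is still running at the 192.168.105.6 address reported by the ip step above):

  # Re-run the failing query from the host with a short per-try timeout...
  nslookup -timeout=5 hello-john.test 192.168.105.6
  # ...and from inside the guest, which avoids the host-side socket_vmnet path.
  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 ssh "nslookup hello-john.test 192.168.105.6"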
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 addons disable ingress-dns --alsologtostderr -v=1: (11.051557209s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 addons disable ingress --alsologtostderr -v=1: (7.078121583s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-537000 -n ingress-addon-legacy-537000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-840000 ssh findmnt            | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | -T /mount2                               |                             |         |         |                     |                     |
	| update-context | functional-840000                        | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-840000                        | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-840000                        | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-840000                        | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-840000                        | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-840000 ssh pgrep              | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-840000 image build -t         | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | localhost/my-image:functional-840000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-840000 image ls               | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	| image          | functional-840000                        | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-840000                        | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-840000                     | functional-840000           | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:48 PDT |
	| start          | -p image-056000 --driver=qemu2           | image-056000                | jenkins | v1.31.2 | 23 Aug 23 11:48 PDT | 23 Aug 23 11:49 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-056000                | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:49 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-056000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-056000                | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:49 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-056000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-056000                | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:49 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-056000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-056000                | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:49 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-056000                          |                             |         |         |                     |                     |
	| delete         | -p image-056000                          | image-056000                | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:49 PDT |
	| start          | -p ingress-addon-legacy-537000           | ingress-addon-legacy-537000 | jenkins | v1.31.2 | 23 Aug 23 11:49 PDT | 23 Aug 23 11:50 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-537000              | ingress-addon-legacy-537000 | jenkins | v1.31.2 | 23 Aug 23 11:50 PDT | 23 Aug 23 11:50 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-537000              | ingress-addon-legacy-537000 | jenkins | v1.31.2 | 23 Aug 23 11:50 PDT | 23 Aug 23 11:50 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-537000              | ingress-addon-legacy-537000 | jenkins | v1.31.2 | 23 Aug 23 11:51 PDT | 23 Aug 23 11:51 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-537000 ip           | ingress-addon-legacy-537000 | jenkins | v1.31.2 | 23 Aug 23 11:51 PDT | 23 Aug 23 11:51 PDT |
	| addons         | ingress-addon-legacy-537000              | ingress-addon-legacy-537000 | jenkins | v1.31.2 | 23 Aug 23 11:51 PDT | 23 Aug 23 11:51 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-537000              | ingress-addon-legacy-537000 | jenkins | v1.31.2 | 23 Aug 23 11:51 PDT | 23 Aug 23 11:51 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:49:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:49:19.959363    3160 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:49:19.959477    3160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:49:19.959480    3160 out.go:309] Setting ErrFile to fd 2...
	I0823 11:49:19.959483    3160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:49:19.959599    3160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:49:19.960535    3160 out.go:303] Setting JSON to false
	I0823 11:49:19.975301    3160 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2933,"bootTime":1692813626,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:49:19.975365    3160 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:49:19.979185    3160 out.go:177] * [ingress-addon-legacy-537000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:49:19.985424    3160 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:49:19.985485    3160 notify.go:220] Checking for updates...
	I0823 11:49:19.988377    3160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:49:19.991394    3160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:49:19.994358    3160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:49:19.997291    3160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:49:20.000335    3160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:49:20.003506    3160 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:49:20.007341    3160 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:49:20.014338    3160 start.go:298] selected driver: qemu2
	I0823 11:49:20.014342    3160 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:49:20.014349    3160 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:49:20.016168    3160 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:49:20.019255    3160 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:49:20.022398    3160 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:49:20.022420    3160 cni.go:84] Creating CNI manager for ""
	I0823 11:49:20.022426    3160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 11:49:20.022433    3160 start_flags.go:319] config:
	{Name:ingress-addon-legacy-537000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:49:20.026559    3160 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:49:20.029418    3160 out.go:177] * Starting control plane node ingress-addon-legacy-537000 in cluster ingress-addon-legacy-537000
	I0823 11:49:20.037341    3160 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0823 11:49:20.237399    3160 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0823 11:49:20.237496    3160 cache.go:57] Caching tarball of preloaded images
	I0823 11:49:20.238237    3160 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0823 11:49:20.243829    3160 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0823 11:49:20.248700    3160 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:49:20.475780    3160 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0823 11:49:31.790981    3160 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:49:31.791119    3160 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:49:32.538471    3160 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0823 11:49:32.538656    3160 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/config.json ...
	I0823 11:49:32.538682    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/config.json: {Name:mk6a1b45fd3f947c17d0264b7033a295709e58a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:32.538925    3160 start.go:365] acquiring machines lock for ingress-addon-legacy-537000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:49:32.538953    3160 start.go:369] acquired machines lock for "ingress-addon-legacy-537000" in 22.333µs
	I0823 11:49:32.538964    3160 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
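
The machines lock acquired above is a file-based mutex obtained by polling (note the Delay:500ms / Timeout:13m0s fields in the lock spec). A minimal stdlib sketch of that acquire-with-retry pattern; the lock path and helper name are illustrative, since minikube itself uses a juju/mutex-style implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file every delay until
// timeout elapses, mirroring the Delay/Timeout fields in the log.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release func
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("machines lock held")
}
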
	I0823 11:49:32.539004    3160 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:49:32.545000    3160 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0823 11:49:32.559374    3160 start.go:159] libmachine.API.Create for "ingress-addon-legacy-537000" (driver="qemu2")
	I0823 11:49:32.559392    3160 client.go:168] LocalClient.Create starting
	I0823 11:49:32.559481    3160 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:49:32.559508    3160 main.go:141] libmachine: Decoding PEM data...
	I0823 11:49:32.559521    3160 main.go:141] libmachine: Parsing certificate...
	I0823 11:49:32.559558    3160 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:49:32.559576    3160 main.go:141] libmachine: Decoding PEM data...
	I0823 11:49:32.559585    3160 main.go:141] libmachine: Parsing certificate...
	I0823 11:49:32.559880    3160 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:49:32.753660    3160 main.go:141] libmachine: Creating SSH key...
	I0823 11:49:32.853815    3160 main.go:141] libmachine: Creating Disk image...
	I0823 11:49:32.853822    3160 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:49:32.853957    3160 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/disk.qcow2
	I0823 11:49:32.862498    3160 main.go:141] libmachine: STDOUT: 
	I0823 11:49:32.862516    3160 main.go:141] libmachine: STDERR: 
	I0823 11:49:32.862581    3160 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/disk.qcow2 +20000M
	I0823 11:49:32.869792    3160 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:49:32.869804    3160 main.go:141] libmachine: STDERR: 
	I0823 11:49:32.869823    3160 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/disk.qcow2
	I0823 11:49:32.869830    3160 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:49:32.869866    3160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:f6:82:c4:a7:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/disk.qcow2
	I0823 11:49:32.903852    3160 main.go:141] libmachine: STDOUT: 
	I0823 11:49:32.903894    3160 main.go:141] libmachine: STDERR: 
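
The VM launch above is an exec of socket_vmnet_client wrapping qemu-system-aarch64. A hedged Go sketch of how such an argument list is composed and run (arguments taken from the logged command, with the pflash, QMP, and pidfile flags trimmed for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	dir := "/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000"
	args := []string{
		"/var/run/socket_vmnet", // socket_vmnet_client's first arg is the vmnet socket
		"qemu-system-aarch64",
		"-M", "virt", "-cpu", "host",
		"-accel", "hvf", // Hypervisor.framework acceleration on Apple silicon
		"-m", "4096", "-smp", "2",
		"-boot", "d", "-cdrom", dir + "/boot2docker.iso",
		"-device", "virtio-net-pci,netdev=net0,mac=86:f6:82:c4:a7:63",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is the inherited vmnet connection
		"-daemonize", dir + "/disk.qcow2",
	}
	out, err := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...).CombinedOutput()
	fmt.Printf("STDOUT/STDERR: %s err: %v\n", out, err)
}
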
	I0823 11:49:32.903899    3160 main.go:141] libmachine: Attempt 0
	I0823 11:49:32.903919    3160 main.go:141] libmachine: Searching for 86:f6:82:c4:a7:63 in /var/db/dhcpd_leases ...
	I0823 11:49:32.903977    3160 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0823 11:49:32.903996    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:84:74:0:74:3 ID:1,8e:84:74:0:74:3 Lease:0x64e7a61b}
	I0823 11:49:32.904006    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:49:32.904012    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:49:32.904018    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:49:34.906122    3160 main.go:141] libmachine: Attempt 1
	I0823 11:49:34.906199    3160 main.go:141] libmachine: Searching for 86:f6:82:c4:a7:63 in /var/db/dhcpd_leases ...
	I0823 11:49:34.906553    3160 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0823 11:49:34.906605    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:84:74:0:74:3 ID:1,8e:84:74:0:74:3 Lease:0x64e7a61b}
	I0823 11:49:34.906672    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:49:34.906705    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:49:34.906738    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:49:36.908837    3160 main.go:141] libmachine: Attempt 2
	I0823 11:49:36.908876    3160 main.go:141] libmachine: Searching for 86:f6:82:c4:a7:63 in /var/db/dhcpd_leases ...
	I0823 11:49:36.908974    3160 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0823 11:49:36.908986    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:84:74:0:74:3 ID:1,8e:84:74:0:74:3 Lease:0x64e7a61b}
	I0823 11:49:36.908992    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:49:36.908997    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:49:36.909001    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:49:38.910966    3160 main.go:141] libmachine: Attempt 3
	I0823 11:49:38.910976    3160 main.go:141] libmachine: Searching for 86:f6:82:c4:a7:63 in /var/db/dhcpd_leases ...
	I0823 11:49:38.911015    3160 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0823 11:49:38.911022    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:84:74:0:74:3 ID:1,8e:84:74:0:74:3 Lease:0x64e7a61b}
	I0823 11:49:38.911028    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:49:38.911043    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:49:38.911048    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:49:40.913082    3160 main.go:141] libmachine: Attempt 4
	I0823 11:49:40.913116    3160 main.go:141] libmachine: Searching for 86:f6:82:c4:a7:63 in /var/db/dhcpd_leases ...
	I0823 11:49:40.913179    3160 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0823 11:49:40.913185    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:84:74:0:74:3 ID:1,8e:84:74:0:74:3 Lease:0x64e7a61b}
	I0823 11:49:40.913194    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:49:40.913200    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:49:40.913207    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:49:42.915196    3160 main.go:141] libmachine: Attempt 5
	I0823 11:49:42.915211    3160 main.go:141] libmachine: Searching for 86:f6:82:c4:a7:63 in /var/db/dhcpd_leases ...
	I0823 11:49:42.915278    3160 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0823 11:49:42.915286    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:84:74:0:74:3 ID:1,8e:84:74:0:74:3 Lease:0x64e7a61b}
	I0823 11:49:42.915291    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f2:4c:61:a4:c3:43 ID:1,f2:4c:61:a4:c3:43 Lease:0x64e7a55a}
	I0823 11:49:42.915297    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:35:71:90:ad:e4 ID:1,ae:35:71:90:ad:e4 Lease:0x64e653ce}
	I0823 11:49:42.915303    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:da:3b:71:83:24:7a ID:1,da:3b:71:83:24:7a Lease:0x64e653a3}
	I0823 11:49:44.917304    3160 main.go:141] libmachine: Attempt 6
	I0823 11:49:44.917350    3160 main.go:141] libmachine: Searching for 86:f6:82:c4:a7:63 in /var/db/dhcpd_leases ...
	I0823 11:49:44.917497    3160 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0823 11:49:44.917529    3160 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:86:f6:82:c4:a7:63 ID:1,86:f6:82:c4:a7:63 Lease:0x64e7a647}
	I0823 11:49:44.917534    3160 main.go:141] libmachine: Found match: 86:f6:82:c4:a7:63
	I0823 11:49:44.917546    3160 main.go:141] libmachine: IP: 192.168.105.6
	I0823 11:49:44.917553    3160 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
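
Each polling attempt above rescans macOS's bootpd lease database for the VM's MAC address until a match appears. A minimal sketch of that scan, assuming the usual key=value block layout of /var/db/dhcpd_leases (exact field names and ordering can vary by macOS release):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans the lease file for an hw_address matching mac and
// returns the ip_address of that entry. It assumes ip_address precedes
// hw_address within each { } block, as in the stock bootpd format.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address lines look like "hw_address=1,86:f6:82:c4:a7:63".
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, path)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "86:f6:82:c4:a7:63")
	fmt.Println(ip, err)
}
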
	I0823 11:49:45.924249    3160 machine.go:88] provisioning docker machine ...
	I0823 11:49:45.924271    3160 buildroot.go:166] provisioning hostname "ingress-addon-legacy-537000"
	I0823 11:49:45.924307    3160 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:45.924575    3160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10058a1e0] 0x10058cc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0823 11:49:45.924586    3160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-537000 && echo "ingress-addon-legacy-537000" | sudo tee /etc/hostname
	I0823 11:49:45.987072    3160 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-537000
	
	I0823 11:49:45.987136    3160 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:45.987381    3160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10058a1e0] 0x10058cc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0823 11:49:45.987391    3160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-537000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-537000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-537000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0823 11:49:46.048884    3160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0823 11:49:46.048897    3160 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17086-1133/.minikube CaCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17086-1133/.minikube}
	I0823 11:49:46.048904    3160 buildroot.go:174] setting up certificates
	I0823 11:49:46.048912    3160 provision.go:83] configureAuth start
	I0823 11:49:46.048916    3160 provision.go:138] copyHostCerts
	I0823 11:49:46.048943    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem
	I0823 11:49:46.048993    3160 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem, removing ...
	I0823 11:49:46.048999    3160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem
	I0823 11:49:46.049125    3160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/key.pem (1679 bytes)
	I0823 11:49:46.049288    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem
	I0823 11:49:46.049315    3160 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem, removing ...
	I0823 11:49:46.049317    3160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem
	I0823 11:49:46.049372    3160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.pem (1078 bytes)
	I0823 11:49:46.049452    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem
	I0823 11:49:46.049471    3160 exec_runner.go:144] found /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem, removing ...
	I0823 11:49:46.049473    3160 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem
	I0823 11:49:46.049545    3160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17086-1133/.minikube/cert.pem (1123 bytes)
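
copyHostCerts follows a found/removing/cp sequence for each certificate so a stale copy never survives a re-run. A condensed stdlib sketch of that idempotent copy (paths illustrative):

package main

import (
	"fmt"
	"os"
)

// copyFresh removes any existing dst before rewriting it from src,
// matching the "found ..., removing ..." then "cp:" lines above.
func copyFresh(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	fmt.Println(copyFresh(".minikube/certs/ca.pem", ".minikube/ca.pem"))
}
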
	I0823 11:49:46.049631    3160 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-537000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-537000]
	I0823 11:49:46.090210    3160 provision.go:172] copyRemoteCerts
	I0823 11:49:46.090249    3160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0823 11:49:46.090256    3160 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/id_rsa Username:docker}
	I0823 11:49:46.125035    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0823 11:49:46.125083    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0823 11:49:46.131894    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0823 11:49:46.131933    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0823 11:49:46.138411    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0823 11:49:46.138458    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0823 11:49:46.145358    3160 provision.go:86] duration metric: configureAuth took 96.440458ms
	I0823 11:49:46.145366    3160 buildroot.go:189] setting minikube options for container-runtime
	I0823 11:49:46.145468    3160 config.go:182] Loaded profile config "ingress-addon-legacy-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0823 11:49:46.145503    3160 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:46.145718    3160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10058a1e0] 0x10058cc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0823 11:49:46.145723    3160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0823 11:49:46.207502    3160 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0823 11:49:46.207509    3160 buildroot.go:70] root file system type: tmpfs
	I0823 11:49:46.207580    3160 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0823 11:49:46.207631    3160 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:46.207876    3160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10058a1e0] 0x10058cc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0823 11:49:46.207915    3160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0823 11:49:46.274672    3160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0823 11:49:46.274728    3160 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:46.274979    3160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10058a1e0] 0x10058cc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0823 11:49:46.274988    3160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0823 11:49:46.606059    3160 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0823 11:49:46.606073    3160 machine.go:91] provisioned docker machine in 681.836584ms
	I0823 11:49:46.606078    3160 client.go:171] LocalClient.Create took 14.047153625s
	I0823 11:49:46.606093    3160 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-537000" took 14.047192417s
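
The docker.service unit installed above is rendered host-side and only swapped in when `diff` reports a difference, so re-provisioning an unchanged machine skips the daemon-reload and restart. A small sketch of the render step, assuming a text/template unit (trimmed to a few fields; the full unit text is the one shown in the log):

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

var unit = template.Must(template.New("docker.service").Parse(`[Unit]
Description=Docker Application Container Engine
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --label provider={{.Provider}} --insecure-registry {{.ServiceCIDR}}

[Install]
WantedBy=multi-user.target
`))

func main() {
	var buf bytes.Buffer
	if err := unit.Execute(&buf, struct{ Provider, ServiceCIDR string }{"qemu2", "10.96.0.0/12"}); err != nil {
		panic(err)
	}
	// The rendered text is written to docker.service.new on the guest, then:
	//   diff -u docker.service docker.service.new || { mv ...; systemctl daemon-reload && systemctl restart docker; }
	fmt.Print(buf.String())
}
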
	I0823 11:49:46.606110    3160 start.go:300] post-start starting for "ingress-addon-legacy-537000" (driver="qemu2")
	I0823 11:49:46.606118    3160 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0823 11:49:46.606186    3160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0823 11:49:46.606200    3160 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/id_rsa Username:docker}
	I0823 11:49:46.637538    3160 ssh_runner.go:195] Run: cat /etc/os-release
	I0823 11:49:46.638994    3160 info.go:137] Remote host: Buildroot 2021.02.12
	I0823 11:49:46.639001    3160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/addons for local assets ...
	I0823 11:49:46.639073    3160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17086-1133/.minikube/files for local assets ...
	I0823 11:49:46.639179    3160 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0823 11:49:46.639184    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0823 11:49:46.639302    3160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0823 11:49:46.641880    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0823 11:49:46.649306    3160 start.go:303] post-start completed in 43.190458ms
	I0823 11:49:46.649980    3160 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/config.json ...
	I0823 11:49:46.650158    3160 start.go:128] duration metric: createHost completed in 14.111623916s
	I0823 11:49:46.650190    3160 main.go:141] libmachine: Using SSH client type: native
	I0823 11:49:46.650408    3160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10058a1e0] 0x10058cc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0823 11:49:46.650413    3160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0823 11:49:46.710783    3160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692816586.414969335
	
	I0823 11:49:46.710791    3160 fix.go:206] guest clock: 1692816586.414969335
	I0823 11:49:46.710796    3160 fix.go:219] Guest: 2023-08-23 11:49:46.414969335 -0700 PDT Remote: 2023-08-23 11:49:46.650161 -0700 PDT m=+26.710497751 (delta=-235.191665ms)
	I0823 11:49:46.710809    3160 fix.go:190] guest clock delta is within tolerance: -235.191665ms
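
The guest-clock check parses the `date +%s.%N` output captured above and compares it against the host clock. A minimal sketch of that delta computation; the one-second tolerance here is an assumption for illustration, not necessarily minikube's threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	guestOut := "1692816586.414969335"
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := guest.Sub(time.Now())
	const tolerance = time.Second // illustrative threshold, not necessarily minikube's
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}
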
	I0823 11:49:46.710822    3160 start.go:83] releasing machines lock for "ingress-addon-legacy-537000", held for 14.172340458s
	I0823 11:49:46.711113    3160 ssh_runner.go:195] Run: cat /version.json
	I0823 11:49:46.711117    3160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0823 11:49:46.711122    3160 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/id_rsa Username:docker}
	I0823 11:49:46.711144    3160 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/id_rsa Username:docker}
	I0823 11:49:46.783899    3160 ssh_runner.go:195] Run: systemctl --version
	I0823 11:49:46.786112    3160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0823 11:49:46.788145    3160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0823 11:49:46.788178    3160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0823 11:49:46.791712    3160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0823 11:49:46.796749    3160 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0823 11:49:46.796762    3160 start.go:466] detecting cgroup driver to use...
	I0823 11:49:46.796828    3160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:49:46.804418    3160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0823 11:49:46.807486    3160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0823 11:49:46.810327    3160 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0823 11:49:46.810351    3160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0823 11:49:46.813571    3160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:49:46.817099    3160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0823 11:49:46.820300    3160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0823 11:49:46.823141    3160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0823 11:49:46.825937    3160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0823 11:49:46.830007    3160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0823 11:49:46.833300    3160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0823 11:49:46.836643    3160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:46.915205    3160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0823 11:49:46.923603    3160 start.go:466] detecting cgroup driver to use...
	I0823 11:49:46.923674    3160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0823 11:49:46.929512    3160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:49:46.934334    3160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0823 11:49:46.940441    3160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0823 11:49:46.945073    3160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:49:46.950214    3160 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0823 11:49:46.989312    3160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0823 11:49:46.994817    3160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0823 11:49:47.000083    3160 ssh_runner.go:195] Run: which cri-dockerd
	I0823 11:49:47.001454    3160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0823 11:49:47.004360    3160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0823 11:49:47.009284    3160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0823 11:49:47.087908    3160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0823 11:49:47.172007    3160 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0823 11:49:47.172031    3160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0823 11:49:47.177453    3160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:47.256754    3160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:49:48.421250    3160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164518709s)
	I0823 11:49:48.421326    3160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:49:48.439892    3160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0823 11:49:48.461181    3160 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0823 11:49:48.461258    3160 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0823 11:49:48.462488    3160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:49:48.466028    3160 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0823 11:49:48.466077    3160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:49:48.471280    3160 docker.go:636] Got preloaded images: 
	I0823 11:49:48.471287    3160 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0823 11:49:48.471328    3160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:49:48.474126    3160 ssh_runner.go:195] Run: which lz4
	I0823 11:49:48.475523    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0823 11:49:48.475620    3160 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0823 11:49:48.476926    3160 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0823 11:49:48.476939    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0823 11:49:50.234820    3160 docker.go:600] Took 1.759301 seconds to copy over tarball
	I0823 11:49:50.234877    3160 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0823 11:49:51.540397    3160 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.305550292s)
	I0823 11:49:51.540408    3160 ssh_runner.go:146] rm: /preloaded.tar.lz4
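
Preload delivery stats /preloaded.tar.lz4 on the guest first and ships the ~460 MB tarball only when that check fails, then unpacks it with `tar -I lz4` and deletes it, as the lines above show. A sketch of that check-then-copy flow; the Runner interface is a hypothetical stand-in for minikube's ssh_runner:

package main

// Runner abstracts "run this command on the guest"; a hypothetical
// stand-in for minikube's ssh_runner.
type Runner interface {
	Run(cmd string) error
	Copy(localPath, remotePath string) error
}

// ensurePreload mirrors the existence-check / scp / tar sequence above.
func ensurePreload(r Runner, local string) error {
	// Existence check: stat -c "%s %y" /preloaded.tar.lz4 (non-zero exit means absent).
	if err := r.Run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		if err := r.Copy(local, "/preloaded.tar.lz4"); err != nil {
			return err
		}
	}
	// Unpack the cached images into /var, then drop the tarball.
	if err := r.Run("sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	return r.Run("sudo rm -f /preloaded.tar.lz4")
}

func main() {
	// Wire a concrete (e.g. SSH-backed) Runner and call ensurePreload with
	// the host-side tarball path from the log.
}
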
	I0823 11:49:51.563794    3160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0823 11:49:51.567854    3160 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0823 11:49:51.574008    3160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0823 11:49:51.651823    3160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0823 11:49:53.213997    3160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.562213125s)
	I0823 11:49:53.214082    3160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0823 11:49:53.220039    3160 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0823 11:49:53.220051    3160 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0823 11:49:53.220054    3160 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0823 11:49:53.261456    3160 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0823 11:49:53.261586    3160 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0823 11:49:53.261879    3160 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0823 11:49:53.261928    3160 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:49:53.261926    3160 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0823 11:49:53.262839    3160 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0823 11:49:53.263318    3160 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0823 11:49:53.263381    3160 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0823 11:49:53.268053    3160 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0823 11:49:53.268277    3160 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:49:53.268515    3160 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0823 11:49:53.269404    3160 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0823 11:49:53.269440    3160 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0823 11:49:53.269479    3160 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0823 11:49:53.269661    3160 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0823 11:49:53.269672    3160 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0823 11:49:53.797786    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0823 11:49:53.804203    3160 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0823 11:49:53.804240    3160 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0823 11:49:53.804272    3160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0823 11:49:53.815383    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
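
The "arch mismatch: want arm64 got amd64" warnings that follow come from resolving each image and inspecting its config architecture. A hedged sketch using the go-containerregistry API that minikube builds on (error handling kept minimal; the platform-correcting re-pull is omitted):

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	ref, err := name.ParseReference("registry.k8s.io/kube-proxy:v1.18.20")
	if err != nil {
		panic(err)
	}
	img, err := remote.Image(ref) // without WithPlatform, resolution defaults to linux/amd64
	if err != nil {
		panic(err)
	}
	cfg, err := img.ConfigFile()
	if err != nil {
		panic(err)
	}
	if cfg.Architecture != "arm64" {
		fmt.Printf("image %s arch mismatch: want arm64 got %s. fixing\n", ref, cfg.Architecture)
	}
}
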
	W0823 11:49:54.081436    3160 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0823 11:49:54.081594    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0823 11:49:54.087345    3160 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0823 11:49:54.087371    3160 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0823 11:49:54.087417    3160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0823 11:49:54.093800    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0823 11:49:54.298793    3160 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0823 11:49:54.298911    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0823 11:49:54.305656    3160 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0823 11:49:54.305679    3160 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0823 11:49:54.305726    3160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0823 11:49:54.311585    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0823 11:49:54.500350    3160 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0823 11:49:54.500473    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:49:54.506715    3160 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0823 11:49:54.506739    3160 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:49:54.506787    3160 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W0823 11:49:54.509501    3160 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0823 11:49:54.509589    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0823 11:49:54.522195    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0823 11:49:54.522270    3160 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0823 11:49:54.522291    3160 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0823 11:49:54.522334    3160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0823 11:49:54.528127    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0823 11:49:54.726603    3160 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0823 11:49:54.726750    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0823 11:49:54.735389    3160 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0823 11:49:54.735417    3160 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0823 11:49:54.735462    3160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0823 11:49:54.741574    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0823 11:49:54.945106    3160 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0823 11:49:54.945235    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0823 11:49:54.951620    3160 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0823 11:49:54.951650    3160 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0823 11:49:54.951710    3160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0823 11:49:54.957612    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0823 11:49:55.157888    3160 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0823 11:49:55.158396    3160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0823 11:49:55.178474    3160 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0823 11:49:55.178533    3160 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0823 11:49:55.178646    3160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0823 11:49:55.191646    3160 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0823 11:49:55.191713    3160 cache_images.go:92] LoadImages completed in 1.971717833s
	W0823 11:49:55.191807    3160 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
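
The X warning above is non-fatal: the per-image cache tarball (pause_3.2) was never written because the download step is what this test exercises, so the loader stats the path, reports the miss, and start-up continues. A minimal sketch of that stat-and-warn step:

package main

import (
	"fmt"
	"os"
)

// loadFromCache stats the cached image tarball before any docker load;
// a missing file surfaces as a warning rather than a hard failure.
func loadFromCache(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("loading cached images: %w", err)
	}
	// scp to the guest and `docker load` would happen here.
	return nil
}

func main() {
	if err := loadFromCache(".minikube/cache/images/arm64/registry.k8s.io/pause_3.2"); err != nil {
		fmt.Println("X Unable to load cached images:", err)
	}
}
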
	I0823 11:49:55.191880    3160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0823 11:49:55.205463    3160 cni.go:84] Creating CNI manager for ""
	I0823 11:49:55.205483    3160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 11:49:55.205533    3160 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0823 11:49:55.205547    3160 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-537000 NodeName:ingress-addon-legacy-537000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0823 11:49:55.205673    3160 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-537000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0823 11:49:55.205737    3160 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-537000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0823 11:49:55.205805    3160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0823 11:49:55.210517    3160 binaries.go:44] Found k8s binaries, skipping transfer
	I0823 11:49:55.210571    3160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0823 11:49:55.214256    3160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0823 11:49:55.220794    3160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0823 11:49:55.226387    3160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0823 11:49:55.232093    3160 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0823 11:49:55.233293    3160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0823 11:49:55.237263    3160 certs.go:56] Setting up /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000 for IP: 192.168.105.6
	I0823 11:49:55.237272    3160 certs.go:190] acquiring lock for shared ca certs: {Name:mk376b7368cd0c31aa9f4550a2157cb5219e493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:55.237420    3160 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key
	I0823 11:49:55.237459    3160 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key
	I0823 11:49:55.237484    3160 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.key
	I0823 11:49:55.237492    3160 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt with IP's: []
	I0823 11:49:55.355749    3160 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt ...
	I0823 11:49:55.355754    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: {Name:mk4b5c5d261118e26d08fe79c0287231509e6a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:55.356015    3160 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.key ...
	I0823 11:49:55.356019    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.key: {Name:mkb4c0113963401f784d52f02d2461e39d1ad7f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:55.356145    3160 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.key.b354f644
	I0823 11:49:55.356154    3160 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0823 11:49:55.503880    3160 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.crt.b354f644 ...
	I0823 11:49:55.503885    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.crt.b354f644: {Name:mkf2dc619430a1d9588bc29e6bfda4af39076340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:55.504028    3160 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.key.b354f644 ...
	I0823 11:49:55.504031    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.key.b354f644: {Name:mkabb0303415cf5bd73fd997a5ee82aff20b24f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:55.504138    3160 certs.go:337] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.crt
	I0823 11:49:55.504372    3160 certs.go:341] copying /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.key
	I0823 11:49:55.504517    3160 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.key
	I0823 11:49:55.504527    3160 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.crt with IP's: []
	I0823 11:49:55.711055    3160 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.crt ...
	I0823 11:49:55.711064    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.crt: {Name:mkaf2ba57dac38d457cd6d1602e36cecc3f28e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:55.711259    3160 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.key ...
	I0823 11:49:55.711263    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.key: {Name:mk2caddb22b8e5d80f3e8ba8b362ab1243cfd8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:49:55.711385    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0823 11:49:55.711406    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0823 11:49:55.711419    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0823 11:49:55.711433    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0823 11:49:55.711446    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0823 11:49:55.711460    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0823 11:49:55.711471    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0823 11:49:55.711488    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0823 11:49:55.711594    3160 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554.pem (1338 bytes)
	W0823 11:49:55.711636    3160 certs.go:433] ignoring /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0823 11:49:55.711644    3160 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca-key.pem (1675 bytes)
	I0823 11:49:55.711674    3160 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem (1078 bytes)
	I0823 11:49:55.711698    3160 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem (1123 bytes)
	I0823 11:49:55.711735    3160 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/certs/key.pem (1679 bytes)
	I0823 11:49:55.711789    3160 certs.go:437] found cert: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0823 11:49:55.711819    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0823 11:49:55.711831    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0823 11:49:55.711842    3160 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:49:55.712265    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0823 11:49:55.720316    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0823 11:49:55.727055    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0823 11:49:55.733528    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0823 11:49:55.740564    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0823 11:49:55.747876    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0823 11:49:55.754646    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0823 11:49:55.761388    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0823 11:49:55.768660    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0823 11:49:55.775741    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0823 11:49:55.782631    3160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0823 11:49:55.789335    3160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
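
The certs.go/crypto.go lines above show minikube generating profile certificates (client, apiserver, proxy-client) signed by the shared minikube CA, then copying them into the guest over SSH. As a rough illustration of the x509 work involved, here is a minimal sketch using Go's crypto/x509; the function names, validity periods, and the throwaway CA are assumptions for the example, not minikube's actual code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // signedCert issues a certificate for the given IPs signed by the CA key
    // pair, roughly the "generating minikube signed cert" step in the log.
    func signedCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses:  ips, // e.g. 192.168.105.6, 10.96.0.1, 127.0.0.1, 10.0.0.1
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
    	// Self-signed throwaway CA standing in for the shared minikube CA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	crt, err := signedCert(ca, caKey, []net.IP{net.ParseIP("192.168.105.6"), net.ParseIP("10.96.0.1")})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(crt))
    }
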
	I0823 11:49:55.794514    3160 ssh_runner.go:195] Run: openssl version
	I0823 11:49:55.796329    3160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0823 11:49:55.799172    3160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0823 11:49:55.800575    3160 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 23 18:45 /usr/share/ca-certificates/1554.pem
	I0823 11:49:55.800595    3160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0823 11:49:55.802518    3160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0823 11:49:55.805522    3160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0823 11:49:55.808832    3160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0823 11:49:55.810291    3160 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 23 18:45 /usr/share/ca-certificates/15542.pem
	I0823 11:49:55.810313    3160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0823 11:49:55.812063    3160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0823 11:49:55.815042    3160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0823 11:49:55.817824    3160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:49:55.819144    3160 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:12 /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:49:55.819166    3160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0823 11:49:55.820968    3160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
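
The openssl/ln pairs above are how the copied PEMs become trusted CAs inside the guest: 'openssl x509 -hash -noout' prints the subject-name hash (b5213941 for minikubeCA.pem in this run), and a '<hash>.0' symlink under /etc/ssl/certs is the layout OpenSSL's CA lookup expects. A minimal sketch of the same dance; 'trust' is a hypothetical helper, not minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // trust links certPath into /etc/ssl/certs under its OpenSSL subject
    // hash, mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
    func trust(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	_ = os.Remove(link) // `ln -fs`: replace any stale link first
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := trust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
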
	I0823 11:49:55.824162    3160 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0823 11:49:55.825374    3160 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0823 11:49:55.825406    3160 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-537000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:49:55.825468    3160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0823 11:49:55.830799    3160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0823 11:49:55.833646    3160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0823 11:49:55.836816    3160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0823 11:49:55.839746    3160 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0823 11:49:55.839766    3160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0823 11:49:55.864865    3160 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0823 11:49:55.864941    3160 kubeadm.go:322] [preflight] Running pre-flight checks
	I0823 11:49:55.949942    3160 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0823 11:49:55.949990    3160 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0823 11:49:55.950042    3160 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0823 11:49:55.997036    3160 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0823 11:49:55.997502    3160 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0823 11:49:55.997538    3160 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0823 11:49:56.082713    3160 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0823 11:49:56.092980    3160 out.go:204]   - Generating certificates and keys ...
	I0823 11:49:56.093018    3160 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0823 11:49:56.093054    3160 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0823 11:49:56.201088    3160 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0823 11:49:56.334700    3160 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0823 11:49:56.369195    3160 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0823 11:49:56.460122    3160 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0823 11:49:56.553427    3160 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0823 11:49:56.553511    3160 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-537000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0823 11:49:56.758029    3160 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0823 11:49:56.758093    3160 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-537000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0823 11:49:56.877744    3160 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0823 11:49:56.967455    3160 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0823 11:49:57.167227    3160 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0823 11:49:57.167279    3160 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0823 11:49:57.350069    3160 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0823 11:49:57.388366    3160 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0823 11:49:57.450214    3160 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0823 11:49:57.491778    3160 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0823 11:49:57.491957    3160 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0823 11:49:57.499244    3160 out.go:204]   - Booting up control plane ...
	I0823 11:49:57.499300    3160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0823 11:49:57.499353    3160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0823 11:49:57.499407    3160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0823 11:49:57.499475    3160 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0823 11:49:57.499547    3160 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0823 11:50:09.499318    3160 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.001324 seconds
	I0823 11:50:09.499454    3160 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0823 11:50:09.509576    3160 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0823 11:50:10.041000    3160 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0823 11:50:10.041196    3160 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-537000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0823 11:50:10.552573    3160 kubeadm.go:322] [bootstrap-token] Using token: z366xw.txxubchct6hzen4v
	I0823 11:50:10.558129    3160 out.go:204]   - Configuring RBAC rules ...
	I0823 11:50:10.558235    3160 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0823 11:50:10.558349    3160 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0823 11:50:10.565480    3160 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0823 11:50:10.566700    3160 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0823 11:50:10.568210    3160 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0823 11:50:10.569362    3160 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0823 11:50:10.573548    3160 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0823 11:50:10.745012    3160 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0823 11:50:10.966182    3160 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0823 11:50:10.967491    3160 kubeadm.go:322] 
	I0823 11:50:10.967541    3160 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0823 11:50:10.967561    3160 kubeadm.go:322] 
	I0823 11:50:10.967614    3160 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0823 11:50:10.967618    3160 kubeadm.go:322] 
	I0823 11:50:10.967634    3160 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0823 11:50:10.967696    3160 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0823 11:50:10.967729    3160 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0823 11:50:10.967733    3160 kubeadm.go:322] 
	I0823 11:50:10.967765    3160 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0823 11:50:10.967823    3160 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0823 11:50:10.967878    3160 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0823 11:50:10.967882    3160 kubeadm.go:322] 
	I0823 11:50:10.967991    3160 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0823 11:50:10.968042    3160 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0823 11:50:10.968054    3160 kubeadm.go:322] 
	I0823 11:50:10.968104    3160 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z366xw.txxubchct6hzen4v \
	I0823 11:50:10.968186    3160 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f \
	I0823 11:50:10.968216    3160 kubeadm.go:322]     --control-plane 
	I0823 11:50:10.968222    3160 kubeadm.go:322] 
	I0823 11:50:10.968283    3160 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0823 11:50:10.968289    3160 kubeadm.go:322] 
	I0823 11:50:10.968372    3160 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z366xw.txxubchct6hzen4v \
	I0823 11:50:10.968467    3160 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:25bfb9bdd14f4bb2775ce7c1686c5d9f5d740ad24201ba0b194f17c1c3700b6f 
	I0823 11:50:10.969519    3160 kubeadm.go:322] W0823 18:49:55.569190    1416 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0823 11:50:10.969671    3160 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0823 11:50:10.969772    3160 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0823 11:50:10.969862    3160 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0823 11:50:10.969952    3160 kubeadm.go:322] W0823 18:49:57.200886    1416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0823 11:50:10.970063    3160 kubeadm.go:322] W0823 18:49:57.201297    1416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0823 11:50:10.970074    3160 cni.go:84] Creating CNI manager for ""
	I0823 11:50:10.970083    3160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 11:50:10.970098    3160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0823 11:50:10.970189    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:10.970193    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128 minikube.k8s.io/name=ingress-addon-legacy-537000 minikube.k8s.io/updated_at=2023_08_23T11_50_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:10.976905    3160 ops.go:34] apiserver oom_adj: -16
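
The "-16" above is the kernel OOM adjustment read from /proc/<pid>/oom_adj for the kube-apiserver process (the bash one-liner a few lines earlier), confirming the apiserver is unlikely to be an early OOM-kill victim. A sketch of the same probe as an illustrative Go helper; minikube itself shells out via pgrep as shown:

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // oomAdj reads the kernel OOM adjustment for a pid; kube-apiserver runs
    // at -16 here, making the kernel reluctant to OOM-kill it.
    func oomAdj(pid int) (int, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(b)))
    }

    func main() {
    	adj, err := oomAdj(os.Getpid())
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("oom_adj:", adj)
    }
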
	I0823 11:50:11.070570    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:11.106954    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:11.641924    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:12.142018    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:12.641916    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:13.141935    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:13.641921    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:14.141880    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:14.641942    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:15.141795    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:15.641793    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:16.141834    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:16.640107    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:17.141558    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:17.641490    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:18.141578    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:18.641719    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:19.141746    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:19.641667    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:20.141623    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:20.640824    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:21.141413    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:21.641614    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:22.141627    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:22.641548    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:23.141373    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:23.641664    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:24.141426    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:24.641527    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:25.141290    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:25.641298    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:26.141500    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:26.641478    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:27.141206    3160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0823 11:50:27.196693    3160 kubeadm.go:1081] duration metric: took 16.2271365s to wait for elevateKubeSystemPrivileges.
	I0823 11:50:27.196709    3160 kubeadm.go:406] StartCluster complete in 31.37237025s
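
The ~500ms cadence of 'kubectl get sa default' above is a poll loop: the cluster-admin binding for elevateKubeSystemPrivileges cannot land until the default ServiceAccount exists, so minikube retries until it appears (about 16s in this run). A minimal sketch of that retry shape, illustrative rather than minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
    // the deadline passes, matching the ~500ms poll loop in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
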
	I0823 11:50:27.196719    3160 settings.go:142] acquiring lock: {Name:mk727aa2e008442bafb3bca409aa11c730e46bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:50:27.196843    3160 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:50:27.197240    3160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/kubeconfig: {Name:mk62eedfc40b21b153318958a231856d07330e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:50:27.197440    3160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0823 11:50:27.197558    3160 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0823 11:50:27.197594    3160 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-537000"
	I0823 11:50:27.197601    3160 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-537000"
	I0823 11:50:27.197628    3160 host.go:66] Checking if "ingress-addon-legacy-537000" exists ...
	I0823 11:50:27.197625    3160 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-537000"
	I0823 11:50:27.197697    3160 config.go:182] Loaded profile config "ingress-addon-legacy-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0823 11:50:27.197728    3160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-537000"
	I0823 11:50:27.197889    3160 kapi.go:59] client config for ingress-addon-legacy-537000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.key", CAFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10193b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0823 11:50:27.198034    3160 host.go:54] host status for "ingress-addon-legacy-537000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/monitor: connect: connection refused
	W0823 11:50:27.198045    3160 addons.go:277] "ingress-addon-legacy-537000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0823 11:50:27.198366    3160 cert_rotation.go:137] Starting client certificate rotation controller
	I0823 11:50:27.198924    3160 kapi.go:59] client config for ingress-addon-legacy-537000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.key", CAFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10193b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0823 11:50:27.209188    3160 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-537000"
	I0823 11:50:27.209207    3160 host.go:66] Checking if "ingress-addon-legacy-537000" exists ...
	I0823 11:50:27.209916    3160 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0823 11:50:27.209923    3160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0823 11:50:27.209929    3160 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/ingress-addon-legacy-537000/id_rsa Username:docker}
	I0823 11:50:27.216237    3160 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-537000" context rescaled to 1 replicas
	I0823 11:50:27.216255    3160 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:50:27.220396    3160 out.go:177] * Verifying Kubernetes components...
	I0823 11:50:27.226490    3160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:50:27.270406    3160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0823 11:50:27.296328    3160 kapi.go:59] client config for ingress-addon-legacy-537000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.key", CAFile:"/Users/jenkins/minikube-integration/17086-1133/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10193b720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0823 11:50:27.296480    3160 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-537000" to be "Ready" ...
	I0823 11:50:27.298019    3160 node_ready.go:49] node "ingress-addon-legacy-537000" has status "Ready":"True"
	I0823 11:50:27.298024    3160 node_ready.go:38] duration metric: took 1.537833ms waiting for node "ingress-addon-legacy-537000" to be "Ready" ...
	I0823 11:50:27.298028    3160 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0823 11:50:27.300319    3160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0823 11:50:27.301248    3160 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-lf2vf" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:27.503267    3160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0823 11:50:27.513211    3160 addons.go:502] enable addons completed in 315.696042ms: enabled=[storage-provisioner default-storageclass]
	I0823 11:50:27.514062    3160 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0823 11:50:29.312667    3160 pod_ready.go:102] pod "coredns-66bff467f8-lf2vf" in "kube-system" namespace has status "Ready":"False"
	I0823 11:50:31.317598    3160 pod_ready.go:102] pod "coredns-66bff467f8-lf2vf" in "kube-system" namespace has status "Ready":"False"
	I0823 11:50:33.816801    3160 pod_ready.go:102] pod "coredns-66bff467f8-lf2vf" in "kube-system" namespace has status "Ready":"False"
	I0823 11:50:35.817327    3160 pod_ready.go:102] pod "coredns-66bff467f8-lf2vf" in "kube-system" namespace has status "Ready":"False"
	I0823 11:50:36.809485    3160 pod_ready.go:92] pod "coredns-66bff467f8-lf2vf" in "kube-system" namespace has status "Ready":"True"
	I0823 11:50:36.809501    3160 pod_ready.go:81] duration metric: took 9.508563209s waiting for pod "coredns-66bff467f8-lf2vf" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.809509    3160 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.812418    3160 pod_ready.go:92] pod "etcd-ingress-addon-legacy-537000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:50:36.812429    3160 pod_ready.go:81] duration metric: took 2.915459ms waiting for pod "etcd-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.812436    3160 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.815575    3160 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-537000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:50:36.815584    3160 pod_ready.go:81] duration metric: took 3.143917ms waiting for pod "kube-apiserver-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.815590    3160 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.818221    3160 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-537000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:50:36.818226    3160 pod_ready.go:81] duration metric: took 2.630042ms waiting for pod "kube-controller-manager-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.818230    3160 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kdfcl" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.820684    3160 pod_ready.go:92] pod "kube-proxy-kdfcl" in "kube-system" namespace has status "Ready":"True"
	I0823 11:50:36.820691    3160 pod_ready.go:81] duration metric: took 2.45825ms waiting for pod "kube-proxy-kdfcl" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:36.820696    3160 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:37.007299    3160 request.go:629] Waited for 186.5255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-537000
	I0823 11:50:37.206544    3160 request.go:629] Waited for 196.089959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-537000
	I0823 11:50:37.213457    3160 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-537000" in "kube-system" namespace has status "Ready":"True"
	I0823 11:50:37.213489    3160 pod_ready.go:81] duration metric: took 392.795375ms waiting for pod "kube-scheduler-ingress-addon-legacy-537000" in "kube-system" namespace to be "Ready" ...
	I0823 11:50:37.213509    3160 pod_ready.go:38] duration metric: took 9.915805959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
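
The "Waited for ... due to client-side throttling" lines above come from client-go's local rate limiter, not from API-server priority and fairness: the burst of readiness checks exceeds the client's QPS budget, so requests queue briefly on the client. The knobs live on rest.Config; a sketch with illustrative values (QPS and Burst are client-go's real field names, the numbers are not minikube's):

    package main

    import "k8s.io/client-go/rest"

    // newConfig returns a client config whose rate limiter allows a higher
    // steady rate and burst, reducing client-side waits like those above.
    func newConfig(host string) *rest.Config {
    	return &rest.Config{
    		Host:  host,
    		QPS:   50,  // sustained requests/sec before local queuing
    		Burst: 100, // short-term burst allowance above QPS
    	}
    }

    func main() {
    	_ = newConfig("https://192.168.105.6:8443")
    }
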
	I0823 11:50:37.213571    3160 api_server.go:52] waiting for apiserver process to appear ...
	I0823 11:50:37.213882    3160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0823 11:50:37.231182    3160 api_server.go:72] duration metric: took 10.015239041s to wait for apiserver process to appear ...
	I0823 11:50:37.231206    3160 api_server.go:88] waiting for apiserver healthz status ...
	I0823 11:50:37.231228    3160 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0823 11:50:37.241213    3160 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0823 11:50:37.242404    3160 api_server.go:141] control plane version: v1.18.20
	I0823 11:50:37.242420    3160 api_server.go:131] duration metric: took 11.208417ms to wait for apiserver health ...
	I0823 11:50:37.242429    3160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0823 11:50:37.407253    3160 request.go:629] Waited for 164.752291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0823 11:50:37.421243    3160 system_pods.go:59] 6 kube-system pods found
	I0823 11:50:37.421278    3160 system_pods.go:61] "coredns-66bff467f8-lf2vf" [8978b6e9-551e-4bee-9987-009b4501381b] Running
	I0823 11:50:37.421291    3160 system_pods.go:61] "etcd-ingress-addon-legacy-537000" [de12cee7-38c5-488d-95cd-200ad71ae67a] Running
	I0823 11:50:37.421300    3160 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-537000" [881dd84a-717a-45d6-bfda-f2cd619fc554] Running
	I0823 11:50:37.421311    3160 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-537000" [9b4996fb-7bcb-4c4b-a4f7-fc749aa1a047] Running
	I0823 11:50:37.421324    3160 system_pods.go:61] "kube-proxy-kdfcl" [4b8bba09-c084-4333-b05a-79ec8240c2d6] Running
	I0823 11:50:37.421333    3160 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-537000" [641041cc-8b56-4ab9-be6a-705158dce318] Running
	I0823 11:50:37.421342    3160 system_pods.go:74] duration metric: took 178.909958ms to wait for pod list to return data ...
	I0823 11:50:37.421358    3160 default_sa.go:34] waiting for default service account to be created ...
	I0823 11:50:37.607219    3160 request.go:629] Waited for 185.75575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0823 11:50:37.612974    3160 default_sa.go:45] found service account: "default"
	I0823 11:50:37.613019    3160 default_sa.go:55] duration metric: took 191.638667ms for default service account to be created ...
	I0823 11:50:37.613039    3160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0823 11:50:37.807214    3160 request.go:629] Waited for 194.088667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0823 11:50:37.819513    3160 system_pods.go:86] 6 kube-system pods found
	I0823 11:50:37.819545    3160 system_pods.go:89] "coredns-66bff467f8-lf2vf" [8978b6e9-551e-4bee-9987-009b4501381b] Running
	I0823 11:50:37.819557    3160 system_pods.go:89] "etcd-ingress-addon-legacy-537000" [de12cee7-38c5-488d-95cd-200ad71ae67a] Running
	I0823 11:50:37.819583    3160 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-537000" [881dd84a-717a-45d6-bfda-f2cd619fc554] Running
	I0823 11:50:37.819597    3160 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-537000" [9b4996fb-7bcb-4c4b-a4f7-fc749aa1a047] Running
	I0823 11:50:37.819609    3160 system_pods.go:89] "kube-proxy-kdfcl" [4b8bba09-c084-4333-b05a-79ec8240c2d6] Running
	I0823 11:50:37.819619    3160 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-537000" [641041cc-8b56-4ab9-be6a-705158dce318] Running
	I0823 11:50:37.819635    3160 system_pods.go:126] duration metric: took 206.582459ms to wait for k8s-apps to be running ...
	I0823 11:50:37.819653    3160 system_svc.go:44] waiting for kubelet service to be running ....
	I0823 11:50:37.819862    3160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0823 11:50:37.835526    3160 system_svc.go:56] duration metric: took 15.870291ms WaitForService to wait for kubelet.
	I0823 11:50:37.835546    3160 kubeadm.go:581] duration metric: took 10.619630666s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0823 11:50:37.835569    3160 node_conditions.go:102] verifying NodePressure condition ...
	I0823 11:50:38.007230    3160 request.go:629] Waited for 171.594625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0823 11:50:38.014922    3160 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0823 11:50:38.015011    3160 node_conditions.go:123] node cpu capacity is 2
	I0823 11:50:38.015071    3160 node_conditions.go:105] duration metric: took 179.495417ms to run NodePressure ...
	I0823 11:50:38.015104    3160 start.go:228] waiting for startup goroutines ...
	I0823 11:50:38.015124    3160 start.go:233] waiting for cluster config update ...
	I0823 11:50:38.015150    3160 start.go:242] writing updated cluster config ...
	I0823 11:50:38.016664    3160 ssh_runner.go:195] Run: rm -f paused
	I0823 11:50:38.081937    3160 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0823 11:50:38.085951    3160 out.go:177] 
	W0823 11:50:38.090167    3160 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0823 11:50:38.094129    3160 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0823 11:50:38.103139    3160 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-537000" cluster and "default" namespace by default
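
The closing warning is a simple minor-version comparison: kubectl 1.27 against cluster 1.18 gives the reported skew of 9, well past the supported window, hence the suggestion to use 'minikube kubectl' instead. A sketch of that arithmetic (illustrative, not minikube's start.go):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference of the minor versions,
    // e.g. minorSkew("1.27.2", "1.18.20") == 9.
    func minorSkew(client, server string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(v, ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("bad version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(server)
    	if err != nil {
    		return 0, err
    	}
    	if c < s {
    		return s - c, nil
    	}
    	return c - s, nil
    }

    func main() {
    	skew, _ := minorSkew("1.27.2", "1.18.20")
    	fmt.Println("minor skew:", skew) // 9
    }
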
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-23 18:49:43 UTC, ends at Wed 2023-08-23 18:51:53 UTC. --
	Aug 23 18:51:22 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:22.705882838Z" level=warning msg="cleaning up after shim disconnected" id=20bf8b31e64bf48c8cad669a87b1fa380d38f07562e3a2285dd845de20fbb60e namespace=moby
	Aug 23 18:51:22 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:22.705886838Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:51:22 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:22.710220114Z" level=warning msg="cleanup warnings time=\"2023-08-23T18:51:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1071]: time="2023-08-23T18:51:36.973957107Z" level=info msg="ignoring event" container=a36e0003163c01a20b1e0160e36aba74388fd5465257cbb6fbf3a5d2d70991b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:36.974200600Z" level=info msg="shim disconnected" id=a36e0003163c01a20b1e0160e36aba74388fd5465257cbb6fbf3a5d2d70991b6 namespace=moby
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:36.974233813Z" level=warning msg="cleaning up after shim disconnected" id=a36e0003163c01a20b1e0160e36aba74388fd5465257cbb6fbf3a5d2d70991b6 namespace=moby
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:36.974240147Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:36.985236903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:36.985277284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:36.985284868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 23 18:51:36 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:36.985301079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 23 18:51:37 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:37.026065803Z" level=info msg="shim disconnected" id=f37339f7879ff8ae8e58ffd8a91912720cc98db9d9c8c4867a835384ce18163d namespace=moby
	Aug 23 18:51:37 ingress-addon-legacy-537000 dockerd[1071]: time="2023-08-23T18:51:37.026081680Z" level=info msg="ignoring event" container=f37339f7879ff8ae8e58ffd8a91912720cc98db9d9c8c4867a835384ce18163d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:51:37 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:37.026203155Z" level=warning msg="cleaning up after shim disconnected" id=f37339f7879ff8ae8e58ffd8a91912720cc98db9d9c8c4867a835384ce18163d namespace=moby
	Aug 23 18:51:37 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:37.026222324Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1071]: time="2023-08-23T18:51:48.432864341Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=babb2734c5e94f81dfd3305f6662e806b99a04dc4026e8b8b880411b03338779
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1071]: time="2023-08-23T18:51:48.448506420Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=babb2734c5e94f81dfd3305f6662e806b99a04dc4026e8b8b880411b03338779
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1071]: time="2023-08-23T18:51:48.512877703Z" level=info msg="ignoring event" container=babb2734c5e94f81dfd3305f6662e806b99a04dc4026e8b8b880411b03338779 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:48.513350885Z" level=info msg="shim disconnected" id=babb2734c5e94f81dfd3305f6662e806b99a04dc4026e8b8b880411b03338779 namespace=moby
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:48.513523280Z" level=warning msg="cleaning up after shim disconnected" id=babb2734c5e94f81dfd3305f6662e806b99a04dc4026e8b8b880411b03338779 namespace=moby
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:48.513534198Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1071]: time="2023-08-23T18:51:48.550017812Z" level=info msg="ignoring event" container=7f3117e21bd18ef4e48df6081ea55f9d5bb439d8410c9276eb83c541eae63863 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:48.550230171Z" level=info msg="shim disconnected" id=7f3117e21bd18ef4e48df6081ea55f9d5bb439d8410c9276eb83c541eae63863 namespace=moby
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:48.550263592Z" level=warning msg="cleaning up after shim disconnected" id=7f3117e21bd18ef4e48df6081ea55f9d5bb439d8410c9276eb83c541eae63863 namespace=moby
	Aug 23 18:51:48 ingress-addon-legacy-537000 dockerd[1078]: time="2023-08-23T18:51:48.550269092Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	f37339f7879ff       13753a81eccfd                                                                                                      17 seconds ago       Exited              hello-world-app           2                   bacca33f1c2d0
	2a6d7d98d3a92       nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                                      39 seconds ago       Running             nginx                     0                   ecfee538e7fc7
	babb2734c5e94       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   59 seconds ago       Exited              controller                0                   7f3117e21bd18
	7f064fc901aac       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   74aed89bd1129
	5d36f380e3028       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   6f3fdcd3aec6f
	ab6c58aac87ec       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   4682ac85b7b09
	d2d2e8888098b       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   b00c3e357af76
	f7b0e1dcbdee5       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   1e058be0204ba
	42f8ccd72caeb       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   0996b958f4f9a
	eebd8f7b0a1de       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   bfab37ec80b0d
	1529b2eb57a7f       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   7fdb973f23a3e
	
	* 
	* ==> coredns [ab6c58aac87e] <==
	* [INFO] 172.17.0.1:33726 - 30189 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043634s
	[INFO] 172.17.0.1:33726 - 55860 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028006s
	[INFO] 172.17.0.1:33726 - 63528 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025547s
	[INFO] 172.17.0.1:33726 - 38928 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002438s
	[INFO] 172.17.0.1:33726 - 24318 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033174s
	[INFO] 172.17.0.1:1435 - 6040 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010502s
	[INFO] 172.17.0.1:1435 - 43558 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009168s
	[INFO] 172.17.0.1:1435 - 5402 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000018337s
	[INFO] 172.17.0.1:1435 - 23172 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012752s
	[INFO] 172.17.0.1:1435 - 12925 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001392s
	[INFO] 172.17.0.1:1435 - 52315 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000031174s
	[INFO] 172.17.0.1:64180 - 55710 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044093s
	[INFO] 172.17.0.1:27646 - 2654 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057346s
	[INFO] 172.17.0.1:27646 - 35724 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000017753s
	[INFO] 172.17.0.1:64180 - 6827 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000023005s
	[INFO] 172.17.0.1:64180 - 63620 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000017962s
	[INFO] 172.17.0.1:27646 - 41093 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010835s
	[INFO] 172.17.0.1:27646 - 39321 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015795s
	[INFO] 172.17.0.1:64180 - 14319 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009627s
	[INFO] 172.17.0.1:27646 - 60456 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013295s
	[INFO] 172.17.0.1:27646 - 43366 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00000921s
	[INFO] 172.17.0.1:64180 - 22743 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026839s
	[INFO] 172.17.0.1:27646 - 32743 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000029714s
	[INFO] 172.17.0.1:64180 - 25298 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022922s
	[INFO] 172.17.0.1:64180 - 39284 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000015253s
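The NXDOMAIN-then-NOERROR bursts above are ordinary Kubernetes search-path expansion, not a CoreDNS fault: the querying pod runs with ndots:5 (the cluster default), and its first search suffix shows it sits in the ingress-nginx namespace, so "hello-world-app.default.svc.cluster.local" (only four dots) is tried against every resolv.conf search suffix before the bare name finally answers. A small Go sketch of that expansion, with the search list inferred from the logged queries:

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates mimics resolver search-list expansion under ndots:5: a name
	// with fewer than ndots dots is tried with each search suffix first and
	// as-is last, which reproduces the query order coredns logged above.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, suffix := range search {
				out = append(out, name+"."+suffix)
			}
		}
		return append(out, name)
	}

	func main() {
		// Inferred search list; a pod in ingress-nginx can confirm it by
		// inspecting its own /etc/resolv.conf.
		search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, c := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
			fmt.Println(c)
		}
	}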
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-537000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-537000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b0cad40b2c48c073112e0845fd1037a030ab0128
	                    minikube.k8s.io/name=ingress-addon-legacy-537000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_23T11_50_10_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 23 Aug 2023 18:50:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-537000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 23 Aug 2023 18:51:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 23 Aug 2023 18:51:47 +0000   Wed, 23 Aug 2023 18:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 23 Aug 2023 18:51:47 +0000   Wed, 23 Aug 2023 18:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 23 Aug 2023 18:51:47 +0000   Wed, 23 Aug 2023 18:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 23 Aug 2023 18:51:47 +0000   Wed, 23 Aug 2023 18:50:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-537000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b8296ab5789461bbf94474dabfcbc10
	  System UUID:                0b8296ab5789461bbf94474dabfcbc10
	  Boot ID:                    c9fa72ee-34ca-413c-b86c-7e4812e5c6f1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-dgrj6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 coredns-66bff467f8-lf2vf                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     87s
	  kube-system                 etcd-ingress-addon-legacy-537000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-apiserver-ingress-addon-legacy-537000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-537000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-kdfcl                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-ingress-addon-legacy-537000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 97s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  97s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  97s   kubelet     Node ingress-addon-legacy-537000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s   kubelet     Node ingress-addon-legacy-537000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s   kubelet     Node ingress-addon-legacy-537000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                97s   kubelet     Node ingress-addon-legacy-537000 status is now: NodeReady
	  Normal  Starting                 86s   kube-proxy  Starting kube-proxy.
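A note on the resource tables above: minikube's raw dump renders each percentage cell as "0 (0%!)(MISSING)", which is Go's fmt notation for a bare "%" in a string that was used as a format string; somewhere in the logging path the already-formatted kubectl text is passed to a Printf-style call as the format rather than as an argument. A two-line reproduction (illustrative, not minikube's actual call site):

	package main

	import "fmt"

	func main() {
		row := "cpu 650m (32%)" // a describe-nodes cell containing a literal %
		fmt.Printf(row)         // bug: row is interpreted as a format string
		fmt.Println()           // the line above prints: cpu 650m (32%!)(MISSING)
		fmt.Printf("%s\n", row) // correct: pass the text as an argument
	}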
	
	* 
	* ==> dmesg <==
	* [Aug23 18:49] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.654432] EINJ: EINJ table not found.
	[  +0.524585] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043181] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000852] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.044538] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.064999] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.440786] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[  +0.175241] systemd-fstab-generator[745]: Ignoring "noauto" for root device
	[  +0.083694] systemd-fstab-generator[756]: Ignoring "noauto" for root device
	[  +0.083502] systemd-fstab-generator[769]: Ignoring "noauto" for root device
	[  +1.147440] kauditd_printk_skb: 17 callbacks suppressed
	[  +3.249272] systemd-fstab-generator[1064]: Ignoring "noauto" for root device
	[  +4.425741] systemd-fstab-generator[1536]: Ignoring "noauto" for root device
	[Aug23 18:50] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.082549] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.996215] systemd-fstab-generator[2615]: Ignoring "noauto" for root device
	[ +16.911411] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.877016] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.469660] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug23 18:51] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [eebd8f7b0a1d] <==
	* raft2023/08/23 18:50:05 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/08/23 18:50:05 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/23 18:50:05 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/08/23 18:50:05 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-08-23 18:50:05.641991 W | auth: simple token is not cryptographically signed
	2023-08-23 18:50:05.642813 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-23 18:50:05.643713 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-23 18:50:05.643783 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-23 18:50:05.643942 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-08-23 18:50:05.644086 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/08/23 18:50:05 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-08-23 18:50:05.644255 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/08/23 18:50:06 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/08/23 18:50:06 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/08/23 18:50:06 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/08/23 18:50:06 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/08/23 18:50:06 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-08-23 18:50:06.475692 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-23 18:50:06.477192 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-23 18:50:06.477266 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-23 18:50:06.477313 I | etcdserver: published {Name:ingress-addon-legacy-537000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-08-23 18:50:06.477488 I | embed: ready to serve client requests
	2023-08-23 18:50:06.479405 I | embed: ready to serve client requests
	2023-08-23 18:50:06.481504 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-23 18:50:06.500873 I | embed: serving client requests on 192.168.105.6:2379
	
	* 
	* ==> kernel <==
	*  18:51:53 up 2 min,  0 users,  load average: 0.87, 0.37, 0.14
	Linux ingress-addon-legacy-537000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f7b0e1dcbdee] <==
	* I0823 18:50:07.937298       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I0823 18:50:08.019577       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0823 18:50:08.019590       1 cache.go:39] Caches are synced for autoregister controller
	I0823 18:50:08.019619       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0823 18:50:08.019631       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0823 18:50:08.038242       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0823 18:50:08.921097       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0823 18:50:08.921445       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0823 18:50:08.929526       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0823 18:50:08.936384       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0823 18:50:08.936426       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0823 18:50:09.082024       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0823 18:50:09.093551       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0823 18:50:09.181829       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0823 18:50:09.182317       1 controller.go:609] quota admission added evaluator for: endpoints
	I0823 18:50:09.183983       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0823 18:50:10.215605       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0823 18:50:10.445049       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0823 18:50:10.663962       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0823 18:50:16.866984       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0823 18:50:26.795187       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0823 18:50:26.843508       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0823 18:50:38.427432       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0823 18:51:10.544790       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0823 18:51:46.433157       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [42f8ccd72cae] <==
	* I0823 18:50:26.889322       1 shared_informer.go:230] Caches are synced for attach detach 
	I0823 18:50:26.891412       1 shared_informer.go:230] Caches are synced for stateful set 
	I0823 18:50:27.039926       1 shared_informer.go:230] Caches are synced for job 
	I0823 18:50:27.103000       1 shared_informer.go:230] Caches are synced for endpoint 
	I0823 18:50:27.144773       1 shared_informer.go:230] Caches are synced for taint 
	I0823 18:50:27.144853       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0823 18:50:27.144876       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-537000. Assuming now as a timestamp.
	I0823 18:50:27.144893       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0823 18:50:27.144973       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0823 18:50:27.145075       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-537000", UID:"818ce115-1f05-40fc-9b3f-2d8181d0a805", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-537000 event: Registered Node ingress-addon-legacy-537000 in Controller
	I0823 18:50:27.195159       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"6a2423b8-dbed-4b83-b208-1c3888095333", APIVersion:"apps/v1", ResourceVersion:"346", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0823 18:50:27.222717       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8cc5681d-f1fb-403b-9d45-58bd38b5b37c", APIVersion:"apps/v1", ResourceVersion:"347", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-rmccg
	I0823 18:50:27.291937       1 shared_informer.go:230] Caches are synced for resource quota 
	I0823 18:50:27.390125       1 shared_informer.go:230] Caches are synced for resource quota 
	I0823 18:50:27.390160       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0823 18:50:27.390169       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0823 18:50:27.390172       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0823 18:50:38.424277       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c8e76bd2-949d-4e8b-a10e-ba2b6d61ffcf", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0823 18:50:38.432247       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"dcfe66c7-5d54-4f8c-ba86-18e4f47f91c6", APIVersion:"apps/v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-ntgj6
	I0823 18:50:38.438729       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9262752e-22ee-44f4-a302-30886a1dfc19", APIVersion:"batch/v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-hrr4r
	I0823 18:50:38.464933       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0e300578-47f4-452c-8c69-28f145ea1c9b", APIVersion:"batch/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-5tdw9
	I0823 18:50:42.171660       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0e300578-47f4-452c-8c69-28f145ea1c9b", APIVersion:"batch/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0823 18:50:42.184615       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9262752e-22ee-44f4-a302-30886a1dfc19", APIVersion:"batch/v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0823 18:51:19.796393       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ae89ea59-cf1e-43ae-88d0-2a59b7942a01", APIVersion:"apps/v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0823 18:51:19.798703       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"b5153637-36a5-477e-be4d-6e924fe49d55", APIVersion:"apps/v1", ResourceVersion:"539", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-dgrj6
	
	* 
	* ==> kube-proxy [d2d2e8888098] <==
	* W0823 18:50:27.489435       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0823 18:50:27.495656       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0823 18:50:27.495680       1 server_others.go:186] Using iptables Proxier.
	I0823 18:50:27.495882       1 server.go:583] Version: v1.18.20
	I0823 18:50:27.496505       1 config.go:315] Starting service config controller
	I0823 18:50:27.496516       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0823 18:50:27.498282       1 config.go:133] Starting endpoints config controller
	I0823 18:50:27.498298       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0823 18:50:27.596637       1 shared_informer.go:230] Caches are synced for service config 
	I0823 18:50:27.598416       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [1529b2eb57a7] <==
	* I0823 18:50:07.968278       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0823 18:50:07.968291       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0823 18:50:07.969221       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0823 18:50:07.969257       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0823 18:50:07.969266       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0823 18:50:07.969280       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0823 18:50:07.970848       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0823 18:50:07.970924       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0823 18:50:07.970961       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0823 18:50:07.970988       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0823 18:50:07.971033       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0823 18:50:07.971063       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0823 18:50:07.971137       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:50:07.971241       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:50:07.971298       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0823 18:50:07.971436       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0823 18:50:07.971484       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0823 18:50:07.971563       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0823 18:50:08.821463       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0823 18:50:08.973338       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0823 18:50:08.978323       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0823 18:50:08.978372       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0823 18:50:09.030276       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0823 18:50:10.769358       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0823 18:50:26.812939       1 factory.go:503] pod: kube-system/coredns-66bff467f8-rmccg is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-23 18:49:43 UTC, ends at Wed 2023-08-23 18:51:53 UTC. --
	Aug 23 18:51:31 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:31.910126    2621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fb52862f90b0775e4e51748aeb6f2ed9d80165042958aa1a056f4fba3e86e88d
	Aug 23 18:51:31 ingress-addon-legacy-537000 kubelet[2621]: E0823 18:51:31.912095    2621 pod_workers.go:191] Error syncing pod f3b94f52-83f2-4484-8a4f-47cf3fec985d ("kube-ingress-dns-minikube_kube-system(f3b94f52-83f2-4484-8a4f-47cf3fec985d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f3b94f52-83f2-4484-8a4f-47cf3fec985d)"
	Aug 23 18:51:35 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:35.257765    2621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-7hqvv" (UniqueName: "kubernetes.io/secret/f3b94f52-83f2-4484-8a4f-47cf3fec985d-minikube-ingress-dns-token-7hqvv") pod "f3b94f52-83f2-4484-8a4f-47cf3fec985d" (UID: "f3b94f52-83f2-4484-8a4f-47cf3fec985d")
	Aug 23 18:51:35 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:35.266149    2621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3b94f52-83f2-4484-8a4f-47cf3fec985d-minikube-ingress-dns-token-7hqvv" (OuterVolumeSpecName: "minikube-ingress-dns-token-7hqvv") pod "f3b94f52-83f2-4484-8a4f-47cf3fec985d" (UID: "f3b94f52-83f2-4484-8a4f-47cf3fec985d"). InnerVolumeSpecName "minikube-ingress-dns-token-7hqvv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 23 18:51:35 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:35.358247    2621 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-7hqvv" (UniqueName: "kubernetes.io/secret/f3b94f52-83f2-4484-8a4f-47cf3fec985d-minikube-ingress-dns-token-7hqvv") on node "ingress-addon-legacy-537000" DevicePath ""
	Aug 23 18:51:36 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:36.911380    2621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 20bf8b31e64bf48c8cad669a87b1fa380d38f07562e3a2285dd845de20fbb60e
	Aug 23 18:51:37 ingress-addon-legacy-537000 kubelet[2621]: W0823 18:51:37.043340    2621 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod3fe2e2e9-7362-4cfd-9acd-dc928750b5bc/f37339f7879ff8ae8e58ffd8a91912720cc98db9d9c8c4867a835384ce18163d": none of the resources are being tracked.
	Aug 23 18:51:37 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:37.885983    2621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fb52862f90b0775e4e51748aeb6f2ed9d80165042958aa1a056f4fba3e86e88d
	Aug 23 18:51:37 ingress-addon-legacy-537000 kubelet[2621]: W0823 18:51:37.891500    2621 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-dgrj6 through plugin: invalid network status for
	Aug 23 18:51:37 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:37.901486    2621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f37339f7879ff8ae8e58ffd8a91912720cc98db9d9c8c4867a835384ce18163d
	Aug 23 18:51:37 ingress-addon-legacy-537000 kubelet[2621]: E0823 18:51:37.901833    2621 pod_workers.go:191] Error syncing pod 3fe2e2e9-7362-4cfd-9acd-dc928750b5bc ("hello-world-app-5f5d8b66bb-dgrj6_default(3fe2e2e9-7362-4cfd-9acd-dc928750b5bc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-dgrj6_default(3fe2e2e9-7362-4cfd-9acd-dc928750b5bc)"
	Aug 23 18:51:37 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:37.907390    2621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 20bf8b31e64bf48c8cad669a87b1fa380d38f07562e3a2285dd845de20fbb60e
	Aug 23 18:51:38 ingress-addon-legacy-537000 kubelet[2621]: W0823 18:51:38.918387    2621 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-dgrj6 through plugin: invalid network status for
	Aug 23 18:51:46 ingress-addon-legacy-537000 kubelet[2621]: E0823 18:51:46.423815    2621 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ntgj6.177e17d358243bf9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ntgj6", UID:"d4602378-8175-402f-a8b0-5e39b7f5678a", APIVersion:"v1", ResourceVersion:"421", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-537000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc131b330992a27f9, ext:96002786884, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc131b330992a27f9, ext:96002786884, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ntgj6.177e17d358243bf9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 23 18:51:46 ingress-addon-legacy-537000 kubelet[2621]: E0823 18:51:46.441242    2621 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ntgj6.177e17d358243bf9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ntgj6", UID:"d4602378-8175-402f-a8b0-5e39b7f5678a", APIVersion:"v1", ResourceVersion:"421", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-537000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc131b330992a27f9, ext:96002786884, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc131b33099af9429, ext:96011530868, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ntgj6.177e17d358243bf9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 23 18:51:49 ingress-addon-legacy-537000 kubelet[2621]: W0823 18:51:49.077321    2621 pod_container_deletor.go:77] Container "7f3117e21bd18ef4e48df6081ea55f9d5bb439d8410c9276eb83c541eae63863" not found in pod's containers
	Aug 23 18:51:49 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:49.911366    2621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f37339f7879ff8ae8e58ffd8a91912720cc98db9d9c8c4867a835384ce18163d
	Aug 23 18:51:49 ingress-addon-legacy-537000 kubelet[2621]: E0823 18:51:49.912070    2621 pod_workers.go:191] Error syncing pod 3fe2e2e9-7362-4cfd-9acd-dc928750b5bc ("hello-world-app-5f5d8b66bb-dgrj6_default(3fe2e2e9-7362-4cfd-9acd-dc928750b5bc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-dgrj6_default(3fe2e2e9-7362-4cfd-9acd-dc928750b5bc)"
	Aug 23 18:51:50 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:50.601787    2621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-756f4" (UniqueName: "kubernetes.io/secret/d4602378-8175-402f-a8b0-5e39b7f5678a-ingress-nginx-token-756f4") pod "d4602378-8175-402f-a8b0-5e39b7f5678a" (UID: "d4602378-8175-402f-a8b0-5e39b7f5678a")
	Aug 23 18:51:50 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:50.601878    2621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d4602378-8175-402f-a8b0-5e39b7f5678a-webhook-cert") pod "d4602378-8175-402f-a8b0-5e39b7f5678a" (UID: "d4602378-8175-402f-a8b0-5e39b7f5678a")
	Aug 23 18:51:50 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:50.611869    2621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4602378-8175-402f-a8b0-5e39b7f5678a-ingress-nginx-token-756f4" (OuterVolumeSpecName: "ingress-nginx-token-756f4") pod "d4602378-8175-402f-a8b0-5e39b7f5678a" (UID: "d4602378-8175-402f-a8b0-5e39b7f5678a"). InnerVolumeSpecName "ingress-nginx-token-756f4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 23 18:51:50 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:50.612193    2621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4602378-8175-402f-a8b0-5e39b7f5678a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d4602378-8175-402f-a8b0-5e39b7f5678a" (UID: "d4602378-8175-402f-a8b0-5e39b7f5678a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 23 18:51:50 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:50.702329    2621 reconciler.go:319] Volume detached for volume "ingress-nginx-token-756f4" (UniqueName: "kubernetes.io/secret/d4602378-8175-402f-a8b0-5e39b7f5678a-ingress-nginx-token-756f4") on node "ingress-addon-legacy-537000" DevicePath ""
	Aug 23 18:51:50 ingress-addon-legacy-537000 kubelet[2621]: I0823 18:51:50.702431    2621 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d4602378-8175-402f-a8b0-5e39b7f5678a-webhook-cert") on node "ingress-addon-legacy-537000" DevicePath ""
	Aug 23 18:51:50 ingress-addon-legacy-537000 kubelet[2621]: W0823 18:51:50.930459    2621 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d4602378-8175-402f-a8b0-5e39b7f5678a/volumes" does not exist
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-537000 -n ingress-addon-legacy-537000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-537000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (58.00s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-354000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-354000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.465831084s)

                                                
                                                
-- stdout --
	* [mount-start-1-354000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-354000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-354000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-354000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-354000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-354000 -n mount-start-1-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-354000 -n mount-start-1-354000: exit status 7 (68.725917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.54s)
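Every qemu2 start failure in this report reduces to the same root cause visible here: nothing is accepting connections on /var/run/socket_vmnet, so both of libmachine's VM-creation attempts fail with "Connection refused". The socket can be probed independently of minikube; a minimal diagnostic sketch (the path is taken from the error text above; this program is not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Same path as minikube's SocketVMnetPath on this host.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" means socket_vmnet is not running;
			// "no such file or directory" means it was never installed.
			fmt.Fprintln(os.Stderr, "socket_vmnet check failed:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}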

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-831000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0823 11:54:08.143133    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:54:10.952109    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-831000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.013597959s)

                                                
                                                
-- stdout --
	* [multinode-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-831000 in cluster multinode-831000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 11:54:05.761077    3456 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:54:05.761186    3456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:54:05.761189    3456 out.go:309] Setting ErrFile to fd 2...
	I0823 11:54:05.761191    3456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:54:05.761293    3456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:54:05.762298    3456 out.go:303] Setting JSON to false
	I0823 11:54:05.777504    3456 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3219,"bootTime":1692813626,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:54:05.777562    3456 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:54:05.782169    3456 out.go:177] * [multinode-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:54:05.789152    3456 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:54:05.792924    3456 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:54:05.789206    3456 notify.go:220] Checking for updates...
	I0823 11:54:05.799102    3456 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:54:05.802135    3456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:54:05.805089    3456 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:54:05.808114    3456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:54:05.811331    3456 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:54:05.815043    3456 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:54:05.822147    3456 start.go:298] selected driver: qemu2
	I0823 11:54:05.822152    3456 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:54:05.822165    3456 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:54:05.824121    3456 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:54:05.827042    3456 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:54:05.830165    3456 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:54:05.830187    3456 cni.go:84] Creating CNI manager for ""
	I0823 11:54:05.830192    3456 cni.go:136] 0 nodes found, recommending kindnet
	I0823 11:54:05.830195    3456 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0823 11:54:05.830201    3456 start_flags.go:319] config:
	{Name:multinode-831000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:multinode-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:54:05.834233    3456 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:54:05.839070    3456 out.go:177] * Starting control plane node multinode-831000 in cluster multinode-831000
	I0823 11:54:05.843131    3456 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:54:05.843151    3456 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:54:05.843163    3456 cache.go:57] Caching tarball of preloaded images
	I0823 11:54:05.843234    3456 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:54:05.843241    3456 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:54:05.843499    3456 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/multinode-831000/config.json ...
	I0823 11:54:05.843513    3456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/multinode-831000/config.json: {Name:mk19ccf5bb91b7e1728fefc1b7eb1a92cc10a373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:54:05.843755    3456 start.go:365] acquiring machines lock for multinode-831000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:54:05.843789    3456 start.go:369] acquired machines lock for "multinode-831000" in 28.042µs
	I0823 11:54:05.843812    3456 start.go:93] Provisioning new machine with config: &{Name:multinode-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:multinode-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:54:05.843838    3456 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:54:05.848081    3456 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 11:54:05.863936    3456 start.go:159] libmachine.API.Create for "multinode-831000" (driver="qemu2")
	I0823 11:54:05.863957    3456 client.go:168] LocalClient.Create starting
	I0823 11:54:05.864004    3456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:54:05.864028    3456 main.go:141] libmachine: Decoding PEM data...
	I0823 11:54:05.864038    3456 main.go:141] libmachine: Parsing certificate...
	I0823 11:54:05.864076    3456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:54:05.864099    3456 main.go:141] libmachine: Decoding PEM data...
	I0823 11:54:05.864111    3456 main.go:141] libmachine: Parsing certificate...
	I0823 11:54:05.864409    3456 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:54:06.021591    3456 main.go:141] libmachine: Creating SSH key...
	I0823 11:54:06.142115    3456 main.go:141] libmachine: Creating Disk image...
	I0823 11:54:06.142120    3456 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:54:06.142263    3456 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:54:06.150772    3456 main.go:141] libmachine: STDOUT: 
	I0823 11:54:06.150784    3456 main.go:141] libmachine: STDERR: 
	I0823 11:54:06.150829    3456 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2 +20000M
	I0823 11:54:06.157976    3456 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:54:06.157994    3456 main.go:141] libmachine: STDERR: 
	I0823 11:54:06.158013    3456 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:54:06.158019    3456 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:54:06.158057    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:a1:96:1c:24:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:54:06.159632    3456 main.go:141] libmachine: STDOUT: 
	I0823 11:54:06.159644    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:54:06.159664    3456 client.go:171] LocalClient.Create took 295.708416ms
	I0823 11:54:08.161763    3456 start.go:128] duration metric: createHost completed in 2.317987209s
	I0823 11:54:08.161823    3456 start.go:83] releasing machines lock for "multinode-831000", held for 2.318103458s
	W0823 11:54:08.161904    3456 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:54:08.172005    3456 out.go:177] * Deleting "multinode-831000" in qemu2 ...
	W0823 11:54:08.191066    3456 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:54:08.191098    3456 start.go:687] Will try again in 5 seconds ...
	I0823 11:54:13.193197    3456 start.go:365] acquiring machines lock for multinode-831000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:54:13.193654    3456 start.go:369] acquired machines lock for "multinode-831000" in 347.125µs
	I0823 11:54:13.193780    3456 start.go:93] Provisioning new machine with config: &{Name:multinode-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:multinode-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:54:13.194100    3456 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:54:13.203703    3456 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 11:54:13.251702    3456 start.go:159] libmachine.API.Create for "multinode-831000" (driver="qemu2")
	I0823 11:54:13.251741    3456 client.go:168] LocalClient.Create starting
	I0823 11:54:13.251883    3456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:54:13.251940    3456 main.go:141] libmachine: Decoding PEM data...
	I0823 11:54:13.251962    3456 main.go:141] libmachine: Parsing certificate...
	I0823 11:54:13.252039    3456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:54:13.252079    3456 main.go:141] libmachine: Decoding PEM data...
	I0823 11:54:13.252091    3456 main.go:141] libmachine: Parsing certificate...
	I0823 11:54:13.252630    3456 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:54:13.524179    3456 main.go:141] libmachine: Creating SSH key...
	I0823 11:54:13.682907    3456 main.go:141] libmachine: Creating Disk image...
	I0823 11:54:13.682918    3456 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:54:13.683063    3456 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:54:13.691580    3456 main.go:141] libmachine: STDOUT: 
	I0823 11:54:13.691593    3456 main.go:141] libmachine: STDERR: 
	I0823 11:54:13.691639    3456 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2 +20000M
	I0823 11:54:13.698790    3456 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:54:13.698806    3456 main.go:141] libmachine: STDERR: 
	I0823 11:54:13.698824    3456 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:54:13.698828    3456 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:54:13.698881    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:43:d7:fa:24:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:54:13.700430    3456 main.go:141] libmachine: STDOUT: 
	I0823 11:54:13.700444    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:54:13.700459    3456 client.go:171] LocalClient.Create took 448.72675ms
	I0823 11:54:15.702579    3456 start.go:128] duration metric: createHost completed in 2.508511416s
	I0823 11:54:15.702641    3456 start.go:83] releasing machines lock for "multinode-831000", held for 2.509049958s
	W0823 11:54:15.703077    3456 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:54:15.717311    3456 out.go:177] 
	W0823 11:54:15.721242    3456 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:54:15.721277    3456 out.go:239] * 
	* 
	W0823 11:54:15.723783    3456 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:54:15.734164    3456 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-831000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (68.137625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.08s)
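
Every start attempt in this test died at the same step: socket_vmnet_client could not reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU was never handed a network file descriptor. That points at the socket_vmnet daemon not listening on the CI host rather than at the qemu2 driver itself. A minimal Go sketch of the same reachability check (a plain unix-socket dial against the path from the log; the helper name is ours, not minikube's):

package main

import (
	"fmt"
	"net"
	"time"
)

// checkSocket dials the unix socket that socket_vmnet_client passes to QEMU.
// A "connection refused" here reproduces the failure logged above and means
// nothing is listening at that path.
func checkSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := checkSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	fmt.Println("socket_vmnet is listening")
}

On this host the dial would have returned the same refusal that libmachine logged, which is why the retry five seconds later failed identically.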

TestMultiNode/serial/DeployApp2Nodes (118.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.880791ms)

** stderr ** 
	error: cluster "multinode-831000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- rollout status deployment/busybox: exit status 1 (56.044209ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.423666ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.445375ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.546584ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.421542ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.817334ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.774917ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.605583ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.861333ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.558167ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.447416ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0823 11:55:30.062860    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:55:55.686707    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:55.695931    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:55.707308    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:55.729407    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:55.771479    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:55.853574    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:56.015668    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:56.337756    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:56.980018    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:55:58.262130    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:56:00.824444    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
E0823 11:56:05.946647    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.887542ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.793208ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.655042ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.133583ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.340834ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.345042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (118.31s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-831000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.926167ms)

** stderr ** 
	error: no server found for cluster "multinode-831000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.704667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-831000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-831000 -v 3 --alsologtostderr: exit status 89 (39.88725ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-831000"

-- /stdout --
** stderr ** 
	I0823 11:56:14.235473    3534 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:14.235640    3534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.235643    3534 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:14.235645    3534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.235758    3534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:14.236230    3534 mustload.go:65] Loading cluster: multinode-831000
	I0823 11:56:14.236663    3534 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:14.240529    3534 out.go:177] * The control plane node must be running for this command
	I0823 11:56:14.243682    3534 out.go:177]   To start a cluster, run: "minikube start -p multinode-831000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-831000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.502167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-831000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-831000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-831000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.0\",\"ClusterName\":\"multinode-831000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (34.842708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.17s)
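
The assertion above decodes the profile JSON and counts Config.Nodes. Because FreshStart2Nodes never brought the cluster up and AddNode was refused, the saved profile still holds only the single control-plane entry, so the expected count of 3 (the two requested nodes plus the one AddNode would have added) cannot be met. A minimal sketch of that count against the JSON dumped above, assuming trimmed struct definitions of our own (only the fields the check needs):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just enough of the `profile list --output json`
// payload shown above to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the output captured above.
	out := []byte(`{"invalid":[],"valid":[{"Name":"multinode-831000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// Prints 1 for this run, not the 3 the test expects.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}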

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 status --output json --alsologtostderr: exit status 7 (29.845459ms)

-- stdout --
	{"Name":"multinode-831000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0823 11:56:14.472426    3544 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:14.472568    3544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.472571    3544 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:14.472574    3544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.472702    3544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:14.472812    3544 out.go:303] Setting JSON to true
	I0823 11:56:14.472825    3544 mustload.go:65] Loading cluster: multinode-831000
	I0823 11:56:14.472870    3544 notify.go:220] Checking for updates...
	I0823 11:56:14.473009    3544 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:14.473014    3544 status.go:255] checking status of multinode-831000 ...
	I0823 11:56:14.473210    3544 status.go:330] multinode-831000 host status = "Stopped" (err=<nil>)
	I0823 11:56:14.473217    3544 status.go:343] host is not running, skipping remaining checks
	I0823 11:56:14.473220    3544 status.go:257] multinode-831000 status: &{Name:multinode-831000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-831000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.846375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
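
The unmarshal failure above is a shape mismatch rather than a VM problem: with the cluster reduced to one stopped profile, `minikube status --output json` printed a single JSON object, while the multinode test decodes the output into a slice ([]cmd.Status). A minimal reproduction, assuming a Status struct of our own trimmed to the fields visible in the stdout above:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is trimmed to the fields printed in the stdout above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Single-node output: one object, not an array.
	out := []byte(`{"Name":"multinode-831000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	// Decoding an object into a slice fails the same way the test did:
	// "json: cannot unmarshal object into Go value of type []main.Status".
	if err := json.Unmarshal(out, &many); err != nil {
		fmt.Println("unmarshal error:", err)
	}

	var one Status
	if err := json.Unmarshal(out, &one); err == nil {
		fmt.Printf("decoded single status: %+v\n", one)
	}
}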

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 node stop m03: exit status 85 (47.501167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-831000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 status: exit status 7 (30.018792ms)

-- stdout --
	multinode-831000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr: exit status 7 (29.768084ms)

-- stdout --
	multinode-831000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0823 11:56:14.610479    3552 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:14.610610    3552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.610613    3552 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:14.610616    3552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.610742    3552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:14.610853    3552 out.go:303] Setting JSON to false
	I0823 11:56:14.610865    3552 mustload.go:65] Loading cluster: multinode-831000
	I0823 11:56:14.610920    3552 notify.go:220] Checking for updates...
	I0823 11:56:14.611063    3552 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:14.611069    3552 status.go:255] checking status of multinode-831000 ...
	I0823 11:56:14.611248    3552 status.go:330] multinode-831000 host status = "Stopped" (err=<nil>)
	I0823 11:56:14.611251    3552 status.go:343] host is not running, skipping remaining checks
	I0823 11:56:14.611253    3552 status.go:257] multinode-831000 status: &{Name:multinode-831000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr": multinode-831000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.34775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (0.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 node start m03 --alsologtostderr: exit status 85 (47.429417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0823 11:56:14.669906    3556 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:14.670108    3556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.670111    3556 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:14.670113    3556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.670236    3556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:14.670445    3556 mustload.go:65] Loading cluster: multinode-831000
	I0823 11:56:14.670606    3556 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:14.675227    3556 out.go:177] 
	W0823 11:56:14.678273    3556 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0823 11:56:14.678277    3556 out.go:239] * 
	* 
	W0823 11:56:14.679924    3556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:56:14.683336    3556 out.go:177] 

** /stderr **
multinode_test.go:256: I0823 11:56:14.669906    3556 out.go:296] Setting OutFile to fd 1 ...
I0823 11:56:14.670108    3556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:56:14.670111    3556 out.go:309] Setting ErrFile to fd 2...
I0823 11:56:14.670113    3556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:56:14.670236    3556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
I0823 11:56:14.670445    3556 mustload.go:65] Loading cluster: multinode-831000
I0823 11:56:14.670606    3556 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:56:14.675227    3556 out.go:177] 
W0823 11:56:14.678273    3556 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0823 11:56:14.678277    3556 out.go:239] * 
* 
W0823 11:56:14.679924    3556 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0823 11:56:14.683336    3556 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-831000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 status: exit status 7 (29.961375ms)

                                                
                                                
-- stdout --
	multinode-831000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-831000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.29025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.11s)
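
Both exit codes in this subtest follow from the earlier failures: `node start m03` exits 85 with GUEST_NODE_RETRIEVE because node m03 was never created (every preceding start aborted before a second node could be added), and `minikube status` exits 7 because the control-plane host itself is stopped. A minimal sketch of how a harness-style `(dbg) Run:` invocation surfaces that exit status from Go (binary path and arguments are copied from the log; the program below is illustrative, not the actual helpers_test.go code):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Invocation copied from the failing log line above.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "multinode-831000", "node", "start", "m03", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 85 == GUEST_NODE_RETRIEVE in this run
	}
}
```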

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-831000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-831000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-831000 --wait=true -v=8 --alsologtostderr
E0823 11:56:16.188784    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-831000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.180978875s)

                                                
                                                
-- stdout --
	* [multinode-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-831000 in cluster multinode-831000
	* Restarting existing qemu2 VM for "multinode-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 11:56:14.866660    3566 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:14.866770    3566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.866773    3566 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:14.866776    3566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:14.866880    3566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:14.867860    3566 out.go:303] Setting JSON to false
	I0823 11:56:14.883085    3566 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3348,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:56:14.883154    3566 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:56:14.887495    3566 out.go:177] * [multinode-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:56:14.894310    3566 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:56:14.898230    3566 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:56:14.894349    3566 notify.go:220] Checking for updates...
	I0823 11:56:14.904226    3566 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:56:14.907256    3566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:56:14.910316    3566 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:56:14.913310    3566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:56:14.916564    3566 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:14.916610    3566 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:56:14.921275    3566 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 11:56:14.928236    3566 start.go:298] selected driver: qemu2
	I0823 11:56:14.928241    3566 start.go:902] validating driver "qemu2" against &{Name:multinode-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:multinode-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:56:14.928331    3566 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:56:14.930243    3566 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:56:14.930268    3566 cni.go:84] Creating CNI manager for ""
	I0823 11:56:14.930272    3566 cni.go:136] 1 nodes found, recommending kindnet
	I0823 11:56:14.930278    3566 start_flags.go:319] config:
	{Name:multinode-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:multinode-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:56:14.934336    3566 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:14.935751    3566 out.go:177] * Starting control plane node multinode-831000 in cluster multinode-831000
	I0823 11:56:14.943296    3566 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:56:14.943329    3566 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:56:14.943342    3566 cache.go:57] Caching tarball of preloaded images
	I0823 11:56:14.943406    3566 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:56:14.943418    3566 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:56:14.943485    3566 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/multinode-831000/config.json ...
	I0823 11:56:14.943834    3566 start.go:365] acquiring machines lock for multinode-831000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:56:14.943864    3566 start.go:369] acquired machines lock for "multinode-831000" in 24.167µs
	I0823 11:56:14.943873    3566 start.go:96] Skipping create...Using existing machine configuration
	I0823 11:56:14.943877    3566 fix.go:54] fixHost starting: 
	I0823 11:56:14.943994    3566 fix.go:102] recreateIfNeeded on multinode-831000: state=Stopped err=<nil>
	W0823 11:56:14.944002    3566 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 11:56:14.948245    3566 out.go:177] * Restarting existing qemu2 VM for "multinode-831000" ...
	I0823 11:56:14.956107    3566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:43:d7:fa:24:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:56:14.957858    3566 main.go:141] libmachine: STDOUT: 
	I0823 11:56:14.957874    3566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:56:14.957902    3566 fix.go:56] fixHost completed within 14.022542ms
	I0823 11:56:14.957910    3566 start.go:83] releasing machines lock for "multinode-831000", held for 14.042625ms
	W0823 11:56:14.957916    3566 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:56:14.957947    3566 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:56:14.957951    3566 start.go:687] Will try again in 5 seconds ...
	I0823 11:56:19.959918    3566 start.go:365] acquiring machines lock for multinode-831000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:56:19.960298    3566 start.go:369] acquired machines lock for "multinode-831000" in 300.375µs
	I0823 11:56:19.960392    3566 start.go:96] Skipping create...Using existing machine configuration
	I0823 11:56:19.960412    3566 fix.go:54] fixHost starting: 
	I0823 11:56:19.961117    3566 fix.go:102] recreateIfNeeded on multinode-831000: state=Stopped err=<nil>
	W0823 11:56:19.961144    3566 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 11:56:19.965626    3566 out.go:177] * Restarting existing qemu2 VM for "multinode-831000" ...
	I0823 11:56:19.973626    3566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:43:d7:fa:24:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:56:19.983290    3566 main.go:141] libmachine: STDOUT: 
	I0823 11:56:19.983357    3566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:56:19.983449    3566 fix.go:56] fixHost completed within 23.03875ms
	I0823 11:56:19.983471    3566 start.go:83] releasing machines lock for "multinode-831000", held for 23.151375ms
	W0823 11:56:19.983722    3566 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:56:19.991558    3566 out.go:177] 
	W0823 11:56:19.995577    3566 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:56:19.995605    3566 out.go:239] * 
	* 
	W0823 11:56:19.998129    3566 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:56:20.006606    3566 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-831000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-831000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (32.826208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)
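
Every qemu2 start in this section fails at the same precondition: libmachine launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach a socket_vmnet daemon listening on /var/run/socket_vmnet (the client hands the resulting connection to qemu as fd 3 for `-netdev socket,id=net0,fd=3`). "Connection refused" means nothing was listening on that socket, so the VM never gets a network backend and the start aborts. The precondition can be checked directly, independent of minikube; this is a sketch using the socket path from the log:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Socket path taken from the error messages above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// Reproduces the "Connection refused" (or "no such file or directory")
		// condition that made every qemu2 start in this run fail.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}
```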

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 node delete m03: exit status 89 (39.5635ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-831000"

                                                
                                                
-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-831000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr: exit status 7 (29.757542ms)

                                                
                                                
-- stdout --
	multinode-831000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 11:56:20.190654    3580 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:20.190810    3580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:20.190813    3580 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:20.190815    3580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:20.190918    3580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:20.191030    3580 out.go:303] Setting JSON to false
	I0823 11:56:20.191042    3580 mustload.go:65] Loading cluster: multinode-831000
	I0823 11:56:20.191101    3580 notify.go:220] Checking for updates...
	I0823 11:56:20.191213    3580 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:20.191219    3580 status.go:255] checking status of multinode-831000 ...
	I0823 11:56:20.191399    3580 status.go:330] multinode-831000 host status = "Stopped" (err=<nil>)
	I0823 11:56:20.191403    3580 status.go:343] host is not running, skipping remaining checks
	I0823 11:56:20.191405    3580 status.go:257] multinode-831000 status: &{Name:multinode-831000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.186917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 status: exit status 7 (30.163125ms)

                                                
                                                
-- stdout --
	multinode-831000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr: exit status 7 (29.556959ms)

                                                
                                                
-- stdout --
	multinode-831000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 11:56:20.340564    3588 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:20.340705    3588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:20.340708    3588 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:20.340710    3588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:20.340823    3588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:20.340949    3588 out.go:303] Setting JSON to false
	I0823 11:56:20.340960    3588 mustload.go:65] Loading cluster: multinode-831000
	I0823 11:56:20.341024    3588 notify.go:220] Checking for updates...
	I0823 11:56:20.341144    3588 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:20.341150    3588 status.go:255] checking status of multinode-831000 ...
	I0823 11:56:20.341343    3588 status.go:330] multinode-831000 host status = "Stopped" (err=<nil>)
	I0823 11:56:20.341346    3588 status.go:343] host is not running, skipping remaining checks
	I0823 11:56:20.341349    3588 status.go:257] multinode-831000 status: &{Name:multinode-831000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr": multinode-831000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-831000 status --alsologtostderr": multinode-831000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (29.265125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)
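
The repeated `exit status 7` from `minikube status` is not a generic error code: status appears to compose its exit code from per-component "not running" bits, so 7 encodes host, kubelet, and apiserver all down at once, matching the three `Stopped` lines in the stdout above. A sketch of that apparent convention (the flag names are illustrative, not minikube's actual identifiers):

```go
package main

import "fmt"

// Illustrative bit flags for the convention observed in this report:
// `minikube status` exits 7 exactly when the host, the kubelet, and the
// apiserver are all reported as Stopped.
const (
	hostNotRunning      = 1 << 0 // 1
	kubeletNotRunning   = 1 << 1 // 2
	apiserverNotRunning = 1 << 2 // 4
)

func main() {
	code := hostNotRunning | kubeletNotRunning | apiserverNotRunning
	fmt.Println("exit status:", code) // 7, as seen throughout these failures
}
```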

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-831000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-831000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178111959s)

                                                
                                                
-- stdout --
	* [multinode-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-831000 in cluster multinode-831000
	* Restarting existing qemu2 VM for "multinode-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 11:56:20.398956    3592 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:20.399060    3592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:20.399063    3592 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:20.399065    3592 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:20.399173    3592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:20.400128    3592 out.go:303] Setting JSON to false
	I0823 11:56:20.415368    3592 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3354,"bootTime":1692813626,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:56:20.415434    3592 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:56:20.420423    3592 out.go:177] * [multinode-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:56:20.427533    3592 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:56:20.431464    3592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:56:20.427602    3592 notify.go:220] Checking for updates...
	I0823 11:56:20.437444    3592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:56:20.440433    3592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:56:20.443451    3592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:56:20.446467    3592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:56:20.449677    3592 config.go:182] Loaded profile config "multinode-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:20.449942    3592 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:56:20.454462    3592 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 11:56:20.460341    3592 start.go:298] selected driver: qemu2
	I0823 11:56:20.460346    3592 start.go:902] validating driver "qemu2" against &{Name:multinode-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:multinode-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:56:20.460417    3592 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:56:20.462357    3592 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:56:20.462386    3592 cni.go:84] Creating CNI manager for ""
	I0823 11:56:20.462390    3592 cni.go:136] 1 nodes found, recommending kindnet
	I0823 11:56:20.462396    3592 start_flags.go:319] config:
	{Name:multinode-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:multinode-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:56:20.466641    3592 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:20.469532    3592 out.go:177] * Starting control plane node multinode-831000 in cluster multinode-831000
	I0823 11:56:20.477444    3592 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:56:20.477461    3592 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:56:20.477474    3592 cache.go:57] Caching tarball of preloaded images
	I0823 11:56:20.477525    3592 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 11:56:20.477530    3592 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 11:56:20.477614    3592 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/multinode-831000/config.json ...
	I0823 11:56:20.477924    3592 start.go:365] acquiring machines lock for multinode-831000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:56:20.477949    3592 start.go:369] acquired machines lock for "multinode-831000" in 19.291µs
	I0823 11:56:20.477957    3592 start.go:96] Skipping create...Using existing machine configuration
	I0823 11:56:20.477961    3592 fix.go:54] fixHost starting: 
	I0823 11:56:20.478072    3592 fix.go:102] recreateIfNeeded on multinode-831000: state=Stopped err=<nil>
	W0823 11:56:20.478080    3592 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 11:56:20.486426    3592 out.go:177] * Restarting existing qemu2 VM for "multinode-831000" ...
	I0823 11:56:20.490404    3592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:43:d7:fa:24:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:56:20.492220    3592 main.go:141] libmachine: STDOUT: 
	I0823 11:56:20.492232    3592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:56:20.492261    3592 fix.go:56] fixHost completed within 14.296792ms
	I0823 11:56:20.492267    3592 start.go:83] releasing machines lock for "multinode-831000", held for 14.31525ms
	W0823 11:56:20.492273    3592 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:56:20.492305    3592 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:56:20.492312    3592 start.go:687] Will try again in 5 seconds ...
	I0823 11:56:25.494305    3592 start.go:365] acquiring machines lock for multinode-831000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:56:25.494786    3592 start.go:369] acquired machines lock for "multinode-831000" in 402.75µs
	I0823 11:56:25.494951    3592 start.go:96] Skipping create...Using existing machine configuration
	I0823 11:56:25.494970    3592 fix.go:54] fixHost starting: 
	I0823 11:56:25.495761    3592 fix.go:102] recreateIfNeeded on multinode-831000: state=Stopped err=<nil>
	W0823 11:56:25.495785    3592 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 11:56:25.500150    3592 out.go:177] * Restarting existing qemu2 VM for "multinode-831000" ...
	I0823 11:56:25.504378    3592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:43:d7:fa:24:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/multinode-831000/disk.qcow2
	I0823 11:56:25.513775    3592 main.go:141] libmachine: STDOUT: 
	I0823 11:56:25.513850    3592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:56:25.513962    3592 fix.go:56] fixHost completed within 18.993792ms
	I0823 11:56:25.513981    3592 start.go:83] releasing machines lock for "multinode-831000", held for 19.170541ms
	W0823 11:56:25.514174    3592 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:56:25.522172    3592 out.go:177] 
	W0823 11:56:25.526367    3592 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:56:25.526416    3592 out.go:239] * 
	* 
	W0823 11:56:25.528792    3592 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:56:25.537018    3592 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-831000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (67.921167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
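
The restart path in the stderr above is a single fixed retry: fixHost fails within ~15ms, minikube logs "StartHost failed, but will try again", sleeps five seconds, and fails the same way, which is why this subtest takes just over five seconds despite doing almost no work. Reduced to a sketch (the function below is illustrative, not minikube's internal API):

```go
package main

import (
	"errors"
	"log"
	"time"
)

// startWithRetry mirrors the one-retry-after-5s shape visible in the
// stderr capture: a fast failure, a warning, a fixed sleep, one more try.
func startWithRetry(start func() error) error {
	if err := start(); err != nil {
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second)
		return start()
	}
	return nil
}

func main() {
	fail := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	if err := startWithRetry(fail); err != nil {
		log.Printf("* Failed to start qemu2 VM: %v", err)
	}
}
```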

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-831000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-831000-m01 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-831000-m01 --driver=qemu2 : exit status 80 (10.186128791s)

                                                
                                                
-- stdout --
	* [multinode-831000-m01] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-831000-m01 in cluster multinode-831000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-831000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-831000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-831000-m02 --driver=qemu2 
E0823 11:56:36.670741    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-831000-m02 --driver=qemu2 : exit status 80 (10.075362125s)

                                                
                                                
-- stdout --
	* [multinode-831000-m02] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-831000-m02 in cluster multinode-831000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-831000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-831000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-831000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-831000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-831000: exit status 89 (79.327666ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-831000"

                                                
                                                
-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-831000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-831000 -n multinode-831000: exit status 7 (33.580708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.55s)
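
ValidateNameConflict hits the same root cause on the fresh-create path ("creating host: create: creating: ...") that the restart tests hit in fixHost, which points at the missing socket_vmnet daemon rather than stale VM state. For reference while reading the remaining failures, these are the exit codes this report has produced so far, with the meanings the logs themselves attach to them (observed in this run only; not a complete minikube exit-code reference):

```go
package main

import "fmt"

// Exit codes observed in this report, with the meaning each log attaches.
var observed = map[int]string{
	7:  "minikube status: host, kubelet, and apiserver all Stopped",
	80: "GUEST_PROVISION: VM start failed (socket_vmnet connection refused)",
	85: "GUEST_NODE_RETRIEVE: requested node (m03) not found in the profile",
	89: "the control plane node must be running for this command",
}

func main() {
	for code, meaning := range observed {
		fmt.Printf("exit %d: %s\n", code, meaning)
	}
}
```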

                                                
                                    
TestPreload (10.23s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.057468625s)

-- stdout --
	* [test-preload-066000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-066000 in cluster test-preload-066000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 11:56:46.321338    3647 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:56:46.321457    3647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:46.321460    3647 out.go:309] Setting ErrFile to fd 2...
	I0823 11:56:46.321463    3647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:56:46.321585    3647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:56:46.322604    3647 out.go:303] Setting JSON to false
	I0823 11:56:46.337782    3647 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3380,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:56:46.337845    3647 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:56:46.342415    3647 out.go:177] * [test-preload-066000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:56:46.349272    3647 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:56:46.349347    3647 notify.go:220] Checking for updates...
	I0823 11:56:46.356272    3647 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:56:46.359288    3647 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:56:46.362271    3647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:56:46.365159    3647 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:56:46.368260    3647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:56:46.371569    3647 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:56:46.371616    3647 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:56:46.376207    3647 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 11:56:46.383236    3647 start.go:298] selected driver: qemu2
	I0823 11:56:46.383241    3647 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:56:46.383248    3647 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:56:46.385192    3647 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:56:46.388153    3647 out.go:177] * Automatically selected the socket_vmnet network
	I0823 11:56:46.391407    3647 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 11:56:46.391435    3647 cni.go:84] Creating CNI manager for ""
	I0823 11:56:46.391450    3647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:56:46.391455    3647 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 11:56:46.391460    3647 start_flags.go:319] config:
	{Name:test-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:56:46.395489    3647 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.402253    3647 out.go:177] * Starting control plane node test-preload-066000 in cluster test-preload-066000
	I0823 11:56:46.406220    3647 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0823 11:56:46.406307    3647 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/test-preload-066000/config.json ...
	I0823 11:56:46.406326    3647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/test-preload-066000/config.json: {Name:mk9c4c90dfc27a34ffd306b390f6ac9eeb84f7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:56:46.406338    3647 cache.go:107] acquiring lock: {Name:mkd8a019f05b5c31f859ac085e05a97c2ef2a5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.406358    3647 cache.go:107] acquiring lock: {Name:mk2496aafe8bde5b2ca37d7732b64ea74bc4c510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.406712    3647 cache.go:107] acquiring lock: {Name:mk98764d830e4b7665a151c7aeaf4aaa052ca6ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.406371    3647 cache.go:107] acquiring lock: {Name:mk3c7cbab2f8e09b7f884b2e09c147d82999b288 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.406867    3647 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0823 11:56:46.406875    3647 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0823 11:56:46.406892    3647 cache.go:107] acquiring lock: {Name:mk916df61288ba3bd3e80b21bc489cde8e325cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.406923    3647 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0823 11:56:46.406996    3647 cache.go:107] acquiring lock: {Name:mk12decb773e21702ac0f1a9303cf2615c1b38d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.407010    3647 cache.go:107] acquiring lock: {Name:mkf72a288adf1dd9e81ca9e2e61a8a7ad041e0f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.407025    3647 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0823 11:56:46.407070    3647 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:56:46.407102    3647 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0823 11:56:46.407127    3647 cache.go:107] acquiring lock: {Name:mkaad32155aa4f31670cd81908a3aae139d09217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:56:46.407088    3647 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0823 11:56:46.407229    3647 start.go:365] acquiring machines lock for test-preload-066000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:56:46.407240    3647 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0823 11:56:46.407260    3647 start.go:369] acquired machines lock for "test-preload-066000" in 23.667µs
	I0823 11:56:46.407271    3647 start.go:93] Provisioning new machine with config: &{Name:test-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:56:46.407332    3647 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:56:46.412241    3647 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 11:56:46.419607    3647 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0823 11:56:46.419633    3647 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0823 11:56:46.419664    3647 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0823 11:56:46.419740    3647 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0823 11:56:46.419752    3647 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0823 11:56:46.420180    3647 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0823 11:56:46.421158    3647 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0823 11:56:46.421173    3647 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0823 11:56:46.428525    3647 start.go:159] libmachine.API.Create for "test-preload-066000" (driver="qemu2")
	I0823 11:56:46.428535    3647 client.go:168] LocalClient.Create starting
	I0823 11:56:46.428603    3647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:56:46.428634    3647 main.go:141] libmachine: Decoding PEM data...
	I0823 11:56:46.428645    3647 main.go:141] libmachine: Parsing certificate...
	I0823 11:56:46.428694    3647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:56:46.428712    3647 main.go:141] libmachine: Decoding PEM data...
	I0823 11:56:46.428721    3647 main.go:141] libmachine: Parsing certificate...
	I0823 11:56:46.429032    3647 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:56:46.616070    3647 main.go:141] libmachine: Creating SSH key...
	I0823 11:56:46.765702    3647 main.go:141] libmachine: Creating Disk image...
	I0823 11:56:46.765714    3647 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:56:46.765875    3647 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2
	I0823 11:56:46.775065    3647 main.go:141] libmachine: STDOUT: 
	I0823 11:56:46.775089    3647 main.go:141] libmachine: STDERR: 
	I0823 11:56:46.775172    3647 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2 +20000M
	I0823 11:56:46.783507    3647 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:56:46.783525    3647 main.go:141] libmachine: STDERR: 
	I0823 11:56:46.783545    3647 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2
	I0823 11:56:46.783557    3647 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:56:46.783599    3647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:51:d2:4c:ba:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2
	I0823 11:56:46.785361    3647 main.go:141] libmachine: STDOUT: 
	I0823 11:56:46.785379    3647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:56:46.785401    3647 client.go:171] LocalClient.Create took 356.870458ms
	I0823 11:56:47.275124    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0823 11:56:47.318243    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0823 11:56:47.655134    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0823 11:56:47.707295    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0823 11:56:47.906064    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0823 11:56:48.038076    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0823 11:56:48.038093    3647 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.631245833s
	I0823 11:56:48.038106    3647 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0823 11:56:48.123148    3647 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0823 11:56:48.123186    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0823 11:56:48.383240    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0823 11:56:48.785616    3647 start.go:128] duration metric: createHost completed in 2.378327083s
	I0823 11:56:48.785669    3647 start.go:83] releasing machines lock for "test-preload-066000", held for 2.378478459s
	W0823 11:56:48.785738    3647 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:56:48.796703    3647 out.go:177] * Deleting "test-preload-066000" in qemu2 ...
	W0823 11:56:48.818273    3647 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:56:48.818312    3647 start.go:687] Will try again in 5 seconds ...
	W0823 11:56:49.165946    3647 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0823 11:56:49.166063    3647 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0823 11:56:49.942732    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0823 11:56:49.942775    3647 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.536556125s
	I0823 11:56:49.942805    3647 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0823 11:56:50.145477    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0823 11:56:50.145525    3647 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.738638417s
	I0823 11:56:50.145554    3647 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0823 11:56:50.782817    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0823 11:56:50.782871    3647 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.375884583s
	I0823 11:56:50.782923    3647 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0823 11:56:51.668898    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0823 11:56:51.668945    3647 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.262247875s
	I0823 11:56:51.668971    3647 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0823 11:56:52.409054    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0823 11:56:52.409102    3647 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.002964042s
	I0823 11:56:52.409131    3647 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0823 11:56:53.035846    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0823 11:56:53.035892    3647 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.629756959s
	I0823 11:56:53.035921    3647 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0823 11:56:53.818389    3647 start.go:365] acquiring machines lock for test-preload-066000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 11:56:53.818884    3647 start.go:369] acquired machines lock for "test-preload-066000" in 385.292µs
	I0823 11:56:53.819000    3647 start.go:93] Provisioning new machine with config: &{Name:test-preload-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 11:56:53.819272    3647 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 11:56:53.827817    3647 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 11:56:53.874499    3647 start.go:159] libmachine.API.Create for "test-preload-066000" (driver="qemu2")
	I0823 11:56:53.874548    3647 client.go:168] LocalClient.Create starting
	I0823 11:56:53.874645    3647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 11:56:53.874701    3647 main.go:141] libmachine: Decoding PEM data...
	I0823 11:56:53.874721    3647 main.go:141] libmachine: Parsing certificate...
	I0823 11:56:53.874783    3647 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 11:56:53.874822    3647 main.go:141] libmachine: Decoding PEM data...
	I0823 11:56:53.874838    3647 main.go:141] libmachine: Parsing certificate...
	I0823 11:56:53.875340    3647 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 11:56:54.197800    3647 main.go:141] libmachine: Creating SSH key...
	I0823 11:56:54.290801    3647 main.go:141] libmachine: Creating Disk image...
	I0823 11:56:54.290810    3647 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 11:56:54.290942    3647 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2
	I0823 11:56:54.299427    3647 main.go:141] libmachine: STDOUT: 
	I0823 11:56:54.299441    3647 main.go:141] libmachine: STDERR: 
	I0823 11:56:54.299499    3647 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2 +20000M
	I0823 11:56:54.306599    3647 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 11:56:54.306612    3647 main.go:141] libmachine: STDERR: 
	I0823 11:56:54.306624    3647 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2
	I0823 11:56:54.306630    3647 main.go:141] libmachine: Starting QEMU VM...
	I0823 11:56:54.306683    3647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:b9:68:d5:0c:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/test-preload-066000/disk.qcow2
	I0823 11:56:54.308162    3647 main.go:141] libmachine: STDOUT: 
	I0823 11:56:54.308182    3647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 11:56:54.308194    3647 client.go:171] LocalClient.Create took 433.65575ms
	I0823 11:56:55.047735    3647 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0823 11:56:55.047793    3647 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.641672334s
	I0823 11:56:55.047819    3647 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0823 11:56:55.047880    3647 cache.go:87] Successfully saved all images to host disk.
	I0823 11:56:56.308587    3647 start.go:128] duration metric: createHost completed in 2.489324s
	I0823 11:56:56.308637    3647 start.go:83] releasing machines lock for "test-preload-066000", held for 2.489815917s
	W0823 11:56:56.308963    3647 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 11:56:56.318514    3647 out.go:177] 
	W0823 11:56:56.322598    3647 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 11:56:56.322627    3647 out.go:239] * 
	* 
	W0823 11:56:56.328978    3647 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:56:56.336496    3647 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-066000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-08-23 11:56:56.353752 -0700 PDT m=+2734.604186126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-066000 -n test-preload-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-066000 -n test-preload-066000: exit status 7 (68.040708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-066000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-066000
--- FAIL: TestPreload (10.23s)
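
TestPreload usefully separates the two halves of "minikube start": image caching succeeded (all eight images needed for v1.24.4 were saved to the host cache, including coredns and storage-provisioner, which required an amd64-to-arm64 "arch mismatch" rewrite), while both createHost attempts died on the socket dial. The exact launch the driver attempted is in the stderr above; a sketch of reproducing just that step by hand, with the long argument list elided here because the log already prints it in full:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 ...
	# With the daemon down this prints the same error as the log:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused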

TestScheduledStopUnix (10.65s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-811000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-811000 --memory=2048 --driver=qemu2 : exit status 80 (10.484070667s)

-- stdout --
	* [scheduled-stop-811000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-811000 in cluster scheduled-stop-811000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-811000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-811000 in cluster scheduled-stop-811000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-08-23 11:57:07.00552 -0700 PDT m=+2745.256316542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-811000 -n scheduled-stop-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-811000 -n scheduled-stop-811000: exit status 7 (66.9355ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-811000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-811000
--- FAIL: TestScheduledStopUnix (10.65s)
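
Nothing scheduled-stop-specific ran here: the test aborted during cluster setup on the same socket_vmnet dial, and the stdout/stderr pair is printed twice above evidently because the harness records it once for the non-zero exit and once again inside the failure message. Once the daemon is restored, the test can be retried on its own before rerunning the whole suite; the invocation below is a sketch that assumes the integration tests live under test/integration (the repo's own Makefile integration target is the authoritative entry point):

	go test ./test/integration -run 'TestScheduledStopUnix' -v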

TestSkaffold (13.44s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1809078029 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-458000 --memory=2600 --driver=qemu2 
E0823 11:57:17.632203    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-458000 --memory=2600 --driver=qemu2 : exit status 80 (9.753411583s)

-- stdout --
	* [skaffold-458000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-458000 in cluster skaffold-458000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-458000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-458000 in cluster skaffold-458000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-08-23 11:57:20.44625 -0700 PDT m=+2758.697503917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-458000 -n skaffold-458000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-458000 -n skaffold-458000: exit status 7 (64.624542ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-458000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-458000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-458000
--- FAIL: TestSkaffold (13.44s)
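
Two details are worth separating in this block. First, skaffold itself is healthy: the version probe at the top returned v2.6.0 before minikube was ever started. Second, the stray cert_rotation "key failed" line refers to a client.crt for ingress-addon-legacy-537000, apparently a profile torn down earlier in the run; it is leftover noise from a certificate watcher, not part of this failure. The failure itself is the same socket_vmnet dial as above. A quick way to confirm the cert error is stale rather than a symptom (path copied from the log line above):

	ls /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt
	# expected: No such file or directory, matching the watcher's complaint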

TestRunningBinaryUpgrade (123.74s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-23 12:00:04.476466 -0700 PDT m=+2922.733299501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-608000 -n running-upgrade-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-608000 -n running-upgrade-608000: exit status 85 (85.575208ms)

-- stdout --
	* Profile "running-upgrade-608000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-608000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-608000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-608000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-608000\"")
helpers_test.go:175: Cleaning up "running-upgrade-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-608000
--- FAIL: TestRunningBinaryUpgrade (123.74s)
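
Unlike the failures above, this one never touched socket_vmnet: the test downloads the old binary it upgrades from and got "bad response code: 404", so no profile was ever created, which is why the post-mortem sees exit status 85 ("Profile ... not found") rather than a stopped host. A plausible cause, offered as an assumption rather than a verified fact: this agent is darwin/arm64, and minikube v1.6.2 predates darwin/arm64 release binaries, so the derived download URL has nothing behind it. One way to test that hypothesis (URL pattern assumed from minikube's release bucket layout):

	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64 | head -n 1
	# an HTTP 404 here would confirm the artifact simply does not exist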

TestKubernetesUpgrade (15.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-563000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-563000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.904252333s)

-- stdout --
	* [kubernetes-upgrade-563000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-563000 in cluster kubernetes-upgrade-563000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-563000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:00:04.875679    4134 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:00:04.875786    4134 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:00:04.875789    4134 out.go:309] Setting ErrFile to fd 2...
	I0823 12:00:04.875791    4134 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:00:04.875908    4134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:00:04.876899    4134 out.go:303] Setting JSON to false
	I0823 12:00:04.892052    4134 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3578,"bootTime":1692813626,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:00:04.892125    4134 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:00:04.896073    4134 out.go:177] * [kubernetes-upgrade-563000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:00:04.902916    4134 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:00:04.906907    4134 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:00:04.902980    4134 notify.go:220] Checking for updates...
	I0823 12:00:04.912878    4134 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:00:04.915938    4134 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:00:04.918864    4134 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:00:04.921841    4134 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:00:04.925260    4134 config.go:182] Loaded profile config "cert-expiration-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:00:04.925321    4134 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:00:04.925367    4134 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:00:04.928848    4134 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:00:04.935901    4134 start.go:298] selected driver: qemu2
	I0823 12:00:04.935913    4134 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:00:04.935924    4134 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:00:04.937890    4134 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:00:04.939119    4134 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:00:04.941936    4134 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0823 12:00:04.941951    4134 cni.go:84] Creating CNI manager for ""
	I0823 12:00:04.941965    4134 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 12:00:04.941969    4134 start_flags.go:319] config:
	{Name:kubernetes-upgrade-563000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-563000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:00:04.946155    4134 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:00:04.952873    4134 out.go:177] * Starting control plane node kubernetes-upgrade-563000 in cluster kubernetes-upgrade-563000
	I0823 12:00:04.956896    4134 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 12:00:04.956917    4134 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0823 12:00:04.956937    4134 cache.go:57] Caching tarball of preloaded images
	I0823 12:00:04.957023    4134 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:00:04.957029    4134 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0823 12:00:04.957092    4134 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/kubernetes-upgrade-563000/config.json ...
	I0823 12:00:04.957112    4134 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/kubernetes-upgrade-563000/config.json: {Name:mkbcb5d1a03aedc6bf7ebdf6f82de7b3b4ad9a5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:00:04.957322    4134 start.go:365] acquiring machines lock for kubernetes-upgrade-563000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:00:04.957359    4134 start.go:369] acquired machines lock for "kubernetes-upgrade-563000" in 26.542µs
	I0823 12:00:04.957370    4134 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-563000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:00:04.957408    4134 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:00:04.961894    4134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:00:04.977613    4134 start.go:159] libmachine.API.Create for "kubernetes-upgrade-563000" (driver="qemu2")
	I0823 12:00:04.977639    4134 client.go:168] LocalClient.Create starting
	I0823 12:00:04.977710    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:00:04.977734    4134 main.go:141] libmachine: Decoding PEM data...
	I0823 12:00:04.977752    4134 main.go:141] libmachine: Parsing certificate...
	I0823 12:00:04.977791    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:00:04.977810    4134 main.go:141] libmachine: Decoding PEM data...
	I0823 12:00:04.977826    4134 main.go:141] libmachine: Parsing certificate...
	I0823 12:00:04.978148    4134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:00:05.092858    4134 main.go:141] libmachine: Creating SSH key...
	I0823 12:00:05.327216    4134 main.go:141] libmachine: Creating Disk image...
	I0823 12:00:05.327225    4134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:00:05.327409    4134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:05.336711    4134 main.go:141] libmachine: STDOUT: 
	I0823 12:00:05.336732    4134 main.go:141] libmachine: STDERR: 
	I0823 12:00:05.336801    4134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2 +20000M
	I0823 12:00:05.344145    4134 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:00:05.344158    4134 main.go:141] libmachine: STDERR: 
	I0823 12:00:05.344179    4134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:05.344190    4134 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:00:05.344226    4134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:8f:72:a0:e0:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:05.345651    4134 main.go:141] libmachine: STDOUT: 
	I0823 12:00:05.345662    4134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:00:05.345685    4134 client.go:171] LocalClient.Create took 368.050709ms
	I0823 12:00:07.347850    4134 start.go:128] duration metric: createHost completed in 2.390491209s
	I0823 12:00:07.347921    4134 start.go:83] releasing machines lock for "kubernetes-upgrade-563000", held for 2.390634291s
	W0823 12:00:07.347986    4134 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:00:07.355410    4134 out.go:177] * Deleting "kubernetes-upgrade-563000" in qemu2 ...
	W0823 12:00:07.375866    4134 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:00:07.375894    4134 start.go:687] Will try again in 5 seconds ...
	I0823 12:00:12.377977    4134 start.go:365] acquiring machines lock for kubernetes-upgrade-563000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:00:12.378470    4134 start.go:369] acquired machines lock for "kubernetes-upgrade-563000" in 367.959µs
	I0823 12:00:12.378612    4134 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-563000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:00:12.378924    4134 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:00:12.383565    4134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:00:12.431378    4134 start.go:159] libmachine.API.Create for "kubernetes-upgrade-563000" (driver="qemu2")
	I0823 12:00:12.431434    4134 client.go:168] LocalClient.Create starting
	I0823 12:00:12.431556    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:00:12.431625    4134 main.go:141] libmachine: Decoding PEM data...
	I0823 12:00:12.431656    4134 main.go:141] libmachine: Parsing certificate...
	I0823 12:00:12.431733    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:00:12.431775    4134 main.go:141] libmachine: Decoding PEM data...
	I0823 12:00:12.431793    4134 main.go:141] libmachine: Parsing certificate...
	I0823 12:00:12.432413    4134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:00:12.566789    4134 main.go:141] libmachine: Creating SSH key...
	I0823 12:00:12.692014    4134 main.go:141] libmachine: Creating Disk image...
	I0823 12:00:12.692021    4134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:00:12.692158    4134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:12.700530    4134 main.go:141] libmachine: STDOUT: 
	I0823 12:00:12.700545    4134 main.go:141] libmachine: STDERR: 
	I0823 12:00:12.700601    4134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2 +20000M
	I0823 12:00:12.707681    4134 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:00:12.707693    4134 main.go:141] libmachine: STDERR: 
	I0823 12:00:12.707708    4134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:12.707717    4134 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:00:12.707757    4134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:6c:1d:e1:c2:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:12.709201    4134 main.go:141] libmachine: STDOUT: 
	I0823 12:00:12.709213    4134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:00:12.709225    4134 client.go:171] LocalClient.Create took 277.792417ms
	I0823 12:00:14.711318    4134 start.go:128] duration metric: createHost completed in 2.332426875s
	I0823 12:00:14.711386    4134 start.go:83] releasing machines lock for "kubernetes-upgrade-563000", held for 2.332967875s
	W0823 12:00:14.711803    4134 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:00:14.723798    4134 out.go:177] 
	W0823 12:00:14.727989    4134 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:00:14.728018    4134 out.go:239] * 
	* 
	W0823 12:00:14.729726    4134 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:00:14.740867    4134 out.go:177] 
** /stderr **
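
Note: both createHost attempts above fail at the same step. libmachine prepares the disk with qemu-img and then launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet; "Connection refused" means no daemon is listening there, so the VM is never started. A minimal Go sketch of that connectivity probe (illustrative only, not minikube source):

-- example (Go, not part of the test log) --
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the log; the socket_vmnet daemon (normally run
	// via launchd) must already be listening here for QEMU networking.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the condition the tests in this report keep hitting.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
-- /example --
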
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-563000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-563000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-563000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-563000 status --format={{.Host}}: exit status 7 (40.222333ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-563000 --memory=2200 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-563000 --memory=2200 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.168741916s)
-- stdout --
	* [kubernetes-upgrade-563000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-563000 in cluster kubernetes-upgrade-563000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-563000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-563000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0823 12:00:14.921122    4157 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:00:14.921235    4157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:00:14.921238    4157 out.go:309] Setting ErrFile to fd 2...
	I0823 12:00:14.921241    4157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:00:14.921349    4157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:00:14.922300    4157 out.go:303] Setting JSON to false
	I0823 12:00:14.937312    4157 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3588,"bootTime":1692813626,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:00:14.937380    4157 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:00:14.941087    4157 out.go:177] * [kubernetes-upgrade-563000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:00:14.944022    4157 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:00:14.944098    4157 notify.go:220] Checking for updates...
	I0823 12:00:14.947990    4157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:00:14.951824    4157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:00:14.954940    4157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:00:14.957973    4157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:00:14.960971    4157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:00:14.964263    4157 config.go:182] Loaded profile config "kubernetes-upgrade-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0823 12:00:14.964489    4157 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:00:14.968965    4157 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 12:00:14.975958    4157 start.go:298] selected driver: qemu2
	I0823 12:00:14.975965    4157 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.16.0 ClusterName:kubernetes-upgrade-563000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:00:14.976030    4157 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:00:14.978872    4157 cni.go:84] Creating CNI manager for ""
	I0823 12:00:14.978902    4157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:00:14.978909    4157 start_flags.go:319] config:
	{Name:kubernetes-upgrade-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-563000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:00:14.982996    4157 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:00:14.988927    4157 out.go:177] * Starting control plane node kubernetes-upgrade-563000 in cluster kubernetes-upgrade-563000
	I0823 12:00:14.992923    4157 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:00:14.992942    4157 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:00:14.992955    4157 cache.go:57] Caching tarball of preloaded images
	I0823 12:00:14.993013    4157 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:00:14.993020    4157 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:00:14.993101    4157 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/kubernetes-upgrade-563000/config.json ...
	I0823 12:00:14.993450    4157 start.go:365] acquiring machines lock for kubernetes-upgrade-563000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:00:14.993476    4157 start.go:369] acquired machines lock for "kubernetes-upgrade-563000" in 19.459µs
	I0823 12:00:14.993485    4157 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:00:14.993488    4157 fix.go:54] fixHost starting: 
	I0823 12:00:14.993611    4157 fix.go:102] recreateIfNeeded on kubernetes-upgrade-563000: state=Stopped err=<nil>
	W0823 12:00:14.993619    4157 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:00:14.996932    4157 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-563000" ...
	I0823 12:00:15.004992    4157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:6c:1d:e1:c2:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:15.006843    4157 main.go:141] libmachine: STDOUT: 
	I0823 12:00:15.006859    4157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:00:15.006889    4157 fix.go:56] fixHost completed within 13.397708ms
	I0823 12:00:15.006894    4157 start.go:83] releasing machines lock for "kubernetes-upgrade-563000", held for 13.41475ms
	W0823 12:00:15.006901    4157 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:00:15.006932    4157 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:00:15.006936    4157 start.go:687] Will try again in 5 seconds ...
	I0823 12:00:20.008885    4157 start.go:365] acquiring machines lock for kubernetes-upgrade-563000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:00:20.009232    4157 start.go:369] acquired machines lock for "kubernetes-upgrade-563000" in 269.667µs
	I0823 12:00:20.009359    4157 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:00:20.009380    4157 fix.go:54] fixHost starting: 
	I0823 12:00:20.010094    4157 fix.go:102] recreateIfNeeded on kubernetes-upgrade-563000: state=Stopped err=<nil>
	W0823 12:00:20.010118    4157 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:00:20.013874    4157 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-563000" ...
	I0823 12:00:20.017850    4157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:6c:1d:e1:c2:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubernetes-upgrade-563000/disk.qcow2
	I0823 12:00:20.027035    4157 main.go:141] libmachine: STDOUT: 
	I0823 12:00:20.027168    4157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:00:20.027245    4157 fix.go:56] fixHost completed within 17.865333ms
	I0823 12:00:20.027262    4157 start.go:83] releasing machines lock for "kubernetes-upgrade-563000", held for 18.008833ms
	W0823 12:00:20.027498    4157 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-563000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-563000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:00:20.035670    4157 out.go:177] 
	W0823 12:00:20.038905    4157 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:00:20.038966    4157 out.go:239] * 
	* 
	W0823 12:00:20.041780    4157 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:00:20.049695    4157 out.go:177] 
** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-563000 --memory=2200 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-563000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-563000 version --output=json: exit status 1 (63.955042ms)
** stderr ** 
	error: context "kubernetes-upgrade-563000" does not exist
** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-08-23 12:00:20.127867 -0700 PDT m=+2938.385232584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-563000 -n kubernetes-upgrade-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-563000 -n kubernetes-upgrade-563000: exit status 7 (33.844ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-563000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-563000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-563000
--- FAIL: TestKubernetesUpgrade (15.41s)
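
Note: the stderr above also shows minikube's retry behavior around StartHost: after the first failure it cleans up the half-created host, waits five seconds (start.go:687), and tries exactly once more before exiting with GUEST_PROVISION. A simplified sketch of that control flow, with hypothetical names standing in for minikube's internals:

-- example (Go, not part of the test log) --
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the provisioning call that failed above; it is a
// hypothetical placeholder, not minikube's real API.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// startWithRetry mirrors the single retry-after-five-seconds pattern in the log.
func startWithRetry() error {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		return startHost()
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}
-- /example --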
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17086
- KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4056117681/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17086
- KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current629925530/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)
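
Note: both hyperkit subtests fail because of the environment rather than a regression: hyperkit is an Intel-only macOS hypervisor, and this agent is darwin/arm64 (Apple Silicon), so minikube rejects the driver with DRV_UNSUPPORTED_OS and exit code 56. A hedged sketch of that kind of platform gate (the real check lives in minikube's driver registry, not in this code):

-- example (Go, not part of the test log) --
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// hyperkit only exists for darwin/amd64; reject everything else.
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
		os.Exit(56) // exit code observed in this report
	}
	fmt.Println("hyperkit driver is supported on this platform")
}
-- /example --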
TestStoppedBinaryUpgrade/Setup (161.24s)
=== RUN   TestStoppedBinaryUpgrade/Setup
E0823 12:00:55.675410    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (161.24s)
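
Note: this setup step downloads an old minikube release (v1.6.2) to upgrade from, and the download fails with "bad response code: 404". v1.6.2 predates Apple Silicon, so a darwin-arm64 binary plausibly never existed at the release URL. A sketch of the status-code check; the bucket layout below is an assumption based on minikube's public release storage:

-- example (Go, not part of the test log) --
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Assumed URL layout; for v1.6.2 there was no darwin-arm64 build,
	// which would yield the 404 seen above.
	url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Printf("bad response code: %d\n", resp.StatusCode)
	}
}
-- /example --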
TestPause/serial/Start (9.91s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-030000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-030000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.836191s)
-- stdout --
	* [pause-030000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-030000 in cluster pause-030000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-030000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-030000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-030000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-030000 -n pause-030000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-030000 -n pause-030000: exit status 7 (69.256375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-030000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.91s)
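
Note: for reference, the two qemu-img invocations that precede every VM launch in the full stderr logs earlier in this report (convert the raw image to qcow2, then grow it by the requested disk size) can be reproduced as below; the paths are placeholders for the per-machine files shown in the log:

-- example (Go, not part of the test log) --
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, mirroring how the
// log prints STDOUT/STDERR after each qemu-img call.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("executing: %s %v\n%s", name, args, out)
	return err
}

func main() {
	disk := "disk.qcow2" // placeholder for the per-machine path in the log
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	// +20000M matches the Disk=20000MB setting used by these tests.
	if err := run("qemu-img", "resize", disk, "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}
-- /example --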
TestNoKubernetes/serial/StartWithK8s (9.89s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-831000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-831000 --driver=qemu2 : exit status 80 (9.825908375s)
-- stdout --
	* [NoKubernetes-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-831000 in cluster NoKubernetes-831000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-831000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000: exit status 7 (67.847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.89s)
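
Note: the post-mortem helpers treat exit status 7 from "minikube status" as a possibly-expected stopped host rather than a hard failure ("status error: exit status 7 (may be ok)"). A sketch of that interpretation; the binary path and profile name are taken from this test:

-- example (Go, not part of the test log) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "NoKubernetes-831000")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit 7 encodes a stopped host; the helpers accept it and skip
		// log retrieval instead of failing the post-mortem.
		fmt.Printf("status error: exit status 7 (may be ok); host state %q\n", state)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", state)
}
-- /example --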
TestNoKubernetes/serial/StartWithStopK8s (5.47s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --driver=qemu2 
E0823 12:01:23.388811    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/ingress-addon-legacy-537000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --driver=qemu2 : exit status 80 (5.398796s)
-- stdout --
	* [NoKubernetes-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-831000
	* Restarting existing qemu2 VM for "NoKubernetes-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000: exit status 7 (69.317ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.47s)
TestNoKubernetes/serial/Start (5.47s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --driver=qemu2 : exit status 80 (5.401474292s)
-- stdout --
	* [NoKubernetes-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-831000
	* Restarting existing qemu2 VM for "NoKubernetes-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000: exit status 7 (70.061291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.47s)

TestNoKubernetes/serial/StartNoArgs (5.46s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-831000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-831000 --driver=qemu2 : exit status 80 (5.395383666s)

-- stdout --
	* [NoKubernetes-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-831000
	* Restarting existing qemu2 VM for "NoKubernetes-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-831000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-831000 -n NoKubernetes-831000: exit status 7 (68.454292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.46s)

TestNetworkPlugins/group/auto/Start (9.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.853391875s)

-- stdout --
	* [auto-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-953000 in cluster auto-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:01:38.022867    4279 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:01:38.023010    4279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:01:38.023012    4279 out.go:309] Setting ErrFile to fd 2...
	I0823 12:01:38.023015    4279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:01:38.023113    4279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:01:38.024113    4279 out.go:303] Setting JSON to false
	I0823 12:01:38.039182    4279 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3672,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:01:38.039257    4279 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:01:38.043202    4279 out.go:177] * [auto-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:01:38.050133    4279 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:01:38.050200    4279 notify.go:220] Checking for updates...
	I0823 12:01:38.054131    4279 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:01:38.055608    4279 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:01:38.059097    4279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:01:38.062136    4279 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:01:38.065137    4279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:01:38.068429    4279 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:01:38.068470    4279 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:01:38.073120    4279 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:01:38.080095    4279 start.go:298] selected driver: qemu2
	I0823 12:01:38.080099    4279 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:01:38.080105    4279 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:01:38.081887    4279 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:01:38.085096    4279 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:01:38.088212    4279 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:01:38.088239    4279 cni.go:84] Creating CNI manager for ""
	I0823 12:01:38.088255    4279 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:01:38.088258    4279 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:01:38.088265    4279 start_flags.go:319] config:
	{Name:auto-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:auto-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}

	I0823 12:01:38.092551    4279 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:01:38.099133    4279 out.go:177] * Starting control plane node auto-953000 in cluster auto-953000
	I0823 12:01:38.102097    4279 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:01:38.102114    4279 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:01:38.102127    4279 cache.go:57] Caching tarball of preloaded images
	I0823 12:01:38.102203    4279 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:01:38.102208    4279 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:01:38.102269    4279 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/auto-953000/config.json ...
	I0823 12:01:38.102281    4279 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/auto-953000/config.json: {Name:mkc6cff24b5da3e60e258118bc24510aa5690f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:01:38.102483    4279 start.go:365] acquiring machines lock for auto-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:01:38.102512    4279 start.go:369] acquired machines lock for "auto-953000" in 23.833µs
	I0823 12:01:38.102523    4279 start.go:93] Provisioning new machine with config: &{Name:auto-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.0 ClusterName:auto-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:01:38.102554    4279 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:01:38.107153    4279 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:01:38.123139    4279 start.go:159] libmachine.API.Create for "auto-953000" (driver="qemu2")
	I0823 12:01:38.123164    4279 client.go:168] LocalClient.Create starting
	I0823 12:01:38.123211    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:01:38.123235    4279 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:38.123244    4279 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:38.123288    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:01:38.123306    4279 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:38.123315    4279 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:38.123669    4279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:01:38.249839    4279 main.go:141] libmachine: Creating SSH key...
	I0823 12:01:38.400710    4279 main.go:141] libmachine: Creating Disk image...
	I0823 12:01:38.400716    4279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:01:38.400873    4279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2
	I0823 12:01:38.409656    4279 main.go:141] libmachine: STDOUT: 
	I0823 12:01:38.409672    4279 main.go:141] libmachine: STDERR: 
	I0823 12:01:38.409721    4279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2 +20000M
	I0823 12:01:38.416852    4279 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:01:38.416864    4279 main.go:141] libmachine: STDERR: 
	I0823 12:01:38.416879    4279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2
	I0823 12:01:38.416886    4279 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:01:38.416928    4279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e0:c9:11:ac:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2
	I0823 12:01:38.418390    4279 main.go:141] libmachine: STDOUT: 
	I0823 12:01:38.418405    4279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:01:38.418425    4279 client.go:171] LocalClient.Create took 295.264292ms
	I0823 12:01:40.420508    4279 start.go:128] duration metric: createHost completed in 2.318015042s
	I0823 12:01:40.420571    4279 start.go:83] releasing machines lock for "auto-953000", held for 2.318128792s
	W0823 12:01:40.420655    4279 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:01:40.432972    4279 out.go:177] * Deleting "auto-953000" in qemu2 ...
	W0823 12:01:40.453567    4279 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:01:40.453603    4279 start.go:687] Will try again in 5 seconds ...
	I0823 12:01:45.455568    4279 start.go:365] acquiring machines lock for auto-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:01:45.455897    4279 start.go:369] acquired machines lock for "auto-953000" in 259.125µs
	I0823 12:01:45.456006    4279 start.go:93] Provisioning new machine with config: &{Name:auto-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.0 ClusterName:auto-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:01:45.456188    4279 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:01:45.464789    4279 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:01:45.513090    4279 start.go:159] libmachine.API.Create for "auto-953000" (driver="qemu2")
	I0823 12:01:45.513137    4279 client.go:168] LocalClient.Create starting
	I0823 12:01:45.513327    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:01:45.513424    4279 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:45.513455    4279 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:45.513543    4279 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:01:45.513590    4279 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:45.513605    4279 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:45.514294    4279 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:01:45.637654    4279 main.go:141] libmachine: Creating SSH key...
	I0823 12:01:45.786819    4279 main.go:141] libmachine: Creating Disk image...
	I0823 12:01:45.786826    4279 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:01:45.786988    4279 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2
	I0823 12:01:45.795738    4279 main.go:141] libmachine: STDOUT: 
	I0823 12:01:45.795752    4279 main.go:141] libmachine: STDERR: 
	I0823 12:01:45.795806    4279 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2 +20000M
	I0823 12:01:45.803023    4279 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:01:45.803036    4279 main.go:141] libmachine: STDERR: 
	I0823 12:01:45.803060    4279 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2
	I0823 12:01:45.803067    4279 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:01:45.803108    4279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:af:e2:f3:d1:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/auto-953000/disk.qcow2
	I0823 12:01:45.804617    4279 main.go:141] libmachine: STDOUT: 
	I0823 12:01:45.804633    4279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:01:45.804647    4279 client.go:171] LocalClient.Create took 291.514375ms
	I0823 12:01:47.806747    4279 start.go:128] duration metric: createHost completed in 2.350618458s
	I0823 12:01:47.806804    4279 start.go:83] releasing machines lock for "auto-953000", held for 2.350971416s
	W0823 12:01:47.807248    4279 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:01:47.818883    4279 out.go:177] 
	W0823 12:01:47.822901    4279 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:01:47.822924    4279 out.go:239] * 
	* 
	W0823 12:01:47.825115    4279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:01:47.833838    4279 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.86s)
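The verbose trace above pinpoints the failing hop: libmachine prepares the disk with qemu-img, then launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to /var/run/socket_vmnet before qemu ever starts; both the initial attempt and the retry die at that connect. The hop can be exercised in isolation; a minimal sketch, assuming socket_vmnet_client's usual "<socket path> <command...>" calling convention, with paths taken from the log above:

	# Connect to the socket, then exec the trailing command with the
	# connection handed down as a file descriptor; with no daemon
	# listening this fails exactly like the runs above:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# -> Failed to connect to "/var/run/socket_vmnet": Connection refused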

TestNetworkPlugins/group/calico/Start (9.63s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.623136458s)

-- stdout --
	* [calico-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-953000 in cluster calico-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:01:49.997765    4389 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:01:49.997870    4389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:01:49.997873    4389 out.go:309] Setting ErrFile to fd 2...
	I0823 12:01:49.997875    4389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:01:49.998026    4389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:01:49.999067    4389 out.go:303] Setting JSON to false
	I0823 12:01:50.014111    4389 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3684,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:01:50.014181    4389 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:01:50.016832    4389 out.go:177] * [calico-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:01:50.024148    4389 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:01:50.027091    4389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:01:50.024218    4389 notify.go:220] Checking for updates...
	I0823 12:01:50.033066    4389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:01:50.034474    4389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:01:50.037061    4389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:01:50.040132    4389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:01:50.043474    4389 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:01:50.043518    4389 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:01:50.048064    4389 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:01:50.055132    4389 start.go:298] selected driver: qemu2
	I0823 12:01:50.055137    4389 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:01:50.055144    4389 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:01:50.057015    4389 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:01:50.060046    4389 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:01:50.063250    4389 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:01:50.063272    4389 cni.go:84] Creating CNI manager for "calico"
	I0823 12:01:50.063278    4389 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0823 12:01:50.063284    4389 start_flags.go:319] config:
	{Name:calico-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:calico-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0}
	I0823 12:01:50.067437    4389 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:01:50.075105    4389 out.go:177] * Starting control plane node calico-953000 in cluster calico-953000
	I0823 12:01:50.079109    4389 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:01:50.079131    4389 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:01:50.079157    4389 cache.go:57] Caching tarball of preloaded images
	I0823 12:01:50.079231    4389 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:01:50.079237    4389 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:01:50.079317    4389 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/calico-953000/config.json ...
	I0823 12:01:50.079329    4389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/calico-953000/config.json: {Name:mk21c3a42b5663fd6f65e419a7642aeba941ea5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:01:50.079560    4389 start.go:365] acquiring machines lock for calico-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:01:50.079605    4389 start.go:369] acquired machines lock for "calico-953000" in 32.291µs
	I0823 12:01:50.079618    4389 start.go:93] Provisioning new machine with config: &{Name:calico-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.28.0 ClusterName:calico-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:01:50.079658    4389 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:01:50.084070    4389 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:01:50.100066    4389 start.go:159] libmachine.API.Create for "calico-953000" (driver="qemu2")
	I0823 12:01:50.100087    4389 client.go:168] LocalClient.Create starting
	I0823 12:01:50.100137    4389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:01:50.100162    4389 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:50.100186    4389 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:50.100230    4389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:01:50.100249    4389 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:50.100261    4389 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:50.100575    4389 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:01:50.212289    4389 main.go:141] libmachine: Creating SSH key...
	I0823 12:01:50.254922    4389 main.go:141] libmachine: Creating Disk image...
	I0823 12:01:50.254927    4389 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:01:50.255068    4389 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2
	I0823 12:01:50.264210    4389 main.go:141] libmachine: STDOUT: 
	I0823 12:01:50.264225    4389 main.go:141] libmachine: STDERR: 
	I0823 12:01:50.264270    4389 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2 +20000M
	I0823 12:01:50.271624    4389 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:01:50.271636    4389 main.go:141] libmachine: STDERR: 
	I0823 12:01:50.271648    4389 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2
	I0823 12:01:50.271657    4389 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:01:50.271697    4389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:14:45:f3:32:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2
	I0823 12:01:50.273353    4389 main.go:141] libmachine: STDOUT: 
	I0823 12:01:50.273374    4389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:01:50.273398    4389 client.go:171] LocalClient.Create took 173.308458ms
	I0823 12:01:52.275485    4389 start.go:128] duration metric: createHost completed in 2.195884125s
	I0823 12:01:52.275577    4389 start.go:83] releasing machines lock for "calico-953000", held for 2.196003417s
	W0823 12:01:52.275647    4389 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:01:52.284755    4389 out.go:177] * Deleting "calico-953000" in qemu2 ...
	W0823 12:01:52.305165    4389 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:01:52.305221    4389 start.go:687] Will try again in 5 seconds ...
	I0823 12:01:57.307335    4389 start.go:365] acquiring machines lock for calico-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:01:57.307970    4389 start.go:369] acquired machines lock for "calico-953000" in 494.5µs
	I0823 12:01:57.308105    4389 start.go:93] Provisioning new machine with config: &{Name:calico-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.28.0 ClusterName:calico-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:01:57.308446    4389 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:01:57.318088    4389 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:01:57.365922    4389 start.go:159] libmachine.API.Create for "calico-953000" (driver="qemu2")
	I0823 12:01:57.365967    4389 client.go:168] LocalClient.Create starting
	I0823 12:01:57.366086    4389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:01:57.366146    4389 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:57.366167    4389 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:57.366244    4389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:01:57.366280    4389 main.go:141] libmachine: Decoding PEM data...
	I0823 12:01:57.366290    4389 main.go:141] libmachine: Parsing certificate...
	I0823 12:01:57.366853    4389 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:01:57.493440    4389 main.go:141] libmachine: Creating SSH key...
	I0823 12:01:57.533154    4389 main.go:141] libmachine: Creating Disk image...
	I0823 12:01:57.533159    4389 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:01:57.533311    4389 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2
	I0823 12:01:57.541896    4389 main.go:141] libmachine: STDOUT: 
	I0823 12:01:57.541918    4389 main.go:141] libmachine: STDERR: 
	I0823 12:01:57.541978    4389 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2 +20000M
	I0823 12:01:57.549089    4389 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:01:57.549099    4389 main.go:141] libmachine: STDERR: 
	I0823 12:01:57.549111    4389 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2
	I0823 12:01:57.549117    4389 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:01:57.549158    4389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:dc:d6:96:f1:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/calico-953000/disk.qcow2
	I0823 12:01:57.550704    4389 main.go:141] libmachine: STDOUT: 
	I0823 12:01:57.550715    4389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:01:57.550731    4389 client.go:171] LocalClient.Create took 184.765333ms
	I0823 12:01:59.552820    4389 start.go:128] duration metric: createHost completed in 2.244422958s
	I0823 12:01:59.552886    4389 start.go:83] releasing machines lock for "calico-953000", held for 2.244965542s
	W0823 12:01:59.553336    4389 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:01:59.562978    4389 out.go:177] 
	W0823 12:01:59.567134    4389 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:01:59.567184    4389 out.go:239] * 
	* 
	W0823 12:01:59.570009    4389 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:01:59.578858    4389 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.63s)
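Since every qemu2 test in this report exits with the same GUEST_PROVISION error, restoring the listener on the host should clear the whole class of failures; the "minikube delete -p <profile>" suggested in the output only removes the half-created VM and does not bring the daemon back. A sketch assuming socket_vmnet was installed as a Homebrew service (service name unverified):

	sudo brew services restart socket_vmnet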

TestNetworkPlugins/group/custom-flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.725177417s)

-- stdout --
	* [custom-flannel-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-953000 in cluster custom-flannel-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:02:01.925483    4510 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:02:01.925593    4510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:01.925597    4510 out.go:309] Setting ErrFile to fd 2...
	I0823 12:02:01.925600    4510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:01.925718    4510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:02:01.926732    4510 out.go:303] Setting JSON to false
	I0823 12:02:01.941807    4510 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3695,"bootTime":1692813626,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:02:01.941892    4510 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:02:01.947227    4510 out.go:177] * [custom-flannel-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:02:01.954174    4510 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:02:01.957991    4510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:02:01.954243    4510 notify.go:220] Checking for updates...
	I0823 12:02:01.961173    4510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:02:01.964169    4510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:02:01.967168    4510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:02:01.970122    4510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:02:01.973541    4510 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:02:01.973590    4510 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:02:01.978170    4510 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:02:01.985151    4510 start.go:298] selected driver: qemu2
	I0823 12:02:01.985155    4510 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:02:01.985161    4510 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:02:01.987102    4510 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:02:01.990163    4510 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:02:01.993219    4510 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:02:01.993239    4510 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0823 12:02:01.993250    4510 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0823 12:02:01.993256    4510 start_flags.go:319] config:
	{Name:custom-flannel-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:custom-flannel-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:02:01.997342    4510 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:02:02.004018    4510 out.go:177] * Starting control plane node custom-flannel-953000 in cluster custom-flannel-953000
	I0823 12:02:02.008137    4510 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:02:02.008155    4510 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:02:02.008169    4510 cache.go:57] Caching tarball of preloaded images
	I0823 12:02:02.008237    4510 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:02:02.008242    4510 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:02:02.008335    4510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/custom-flannel-953000/config.json ...
	I0823 12:02:02.008348    4510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/custom-flannel-953000/config.json: {Name:mk8419d0c9e536aefac192c787bfc631bbbac742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:02:02.008572    4510 start.go:365] acquiring machines lock for custom-flannel-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:02.008604    4510 start.go:369] acquired machines lock for "custom-flannel-953000" in 25.958µs
	I0823 12:02:02.008615    4510 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:custom-flannel-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:02.008644    4510 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:02.016150    4510 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:02.032351    4510 start.go:159] libmachine.API.Create for "custom-flannel-953000" (driver="qemu2")
	I0823 12:02:02.032375    4510 client.go:168] LocalClient.Create starting
	I0823 12:02:02.032432    4510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:02.032462    4510 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:02.032471    4510 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:02.032497    4510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:02.032520    4510 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:02.032527    4510 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:02.032864    4510 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:02.150751    4510 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:02.257685    4510 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:02.257692    4510 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:02.257827    4510 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2
	I0823 12:02:02.270139    4510 main.go:141] libmachine: STDOUT: 
	I0823 12:02:02.270161    4510 main.go:141] libmachine: STDERR: 
	I0823 12:02:02.270224    4510 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2 +20000M
	I0823 12:02:02.277385    4510 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:02.277397    4510 main.go:141] libmachine: STDERR: 
	I0823 12:02:02.277416    4510 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2
	I0823 12:02:02.277428    4510 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:02.277464    4510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:0a:0f:1d:62:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2
	I0823 12:02:02.278907    4510 main.go:141] libmachine: STDOUT: 
	I0823 12:02:02.278926    4510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:02.278946    4510 client.go:171] LocalClient.Create took 246.57225ms
	I0823 12:02:04.281030    4510 start.go:128] duration metric: createHost completed in 2.272446208s
	I0823 12:02:04.281092    4510 start.go:83] releasing machines lock for "custom-flannel-953000", held for 2.272556375s
	W0823 12:02:04.281174    4510 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:04.289486    4510 out.go:177] * Deleting "custom-flannel-953000" in qemu2 ...
	W0823 12:02:04.313879    4510 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:04.313908    4510 start.go:687] Will try again in 5 seconds ...
	I0823 12:02:09.315998    4510 start.go:365] acquiring machines lock for custom-flannel-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:09.316678    4510 start.go:369] acquired machines lock for "custom-flannel-953000" in 534.709µs
	I0823 12:02:09.316823    4510 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:custom-flannel-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:09.317145    4510 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:09.326897    4510 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:09.375705    4510 start.go:159] libmachine.API.Create for "custom-flannel-953000" (driver="qemu2")
	I0823 12:02:09.375746    4510 client.go:168] LocalClient.Create starting
	I0823 12:02:09.375865    4510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:09.375918    4510 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:09.375938    4510 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:09.376019    4510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:09.376054    4510 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:09.376072    4510 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:09.376573    4510 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:09.510708    4510 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:09.562269    4510 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:09.562274    4510 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:09.562416    4510 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2
	I0823 12:02:09.570899    4510 main.go:141] libmachine: STDOUT: 
	I0823 12:02:09.570912    4510 main.go:141] libmachine: STDERR: 
	I0823 12:02:09.570965    4510 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2 +20000M
	I0823 12:02:09.578011    4510 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:09.578023    4510 main.go:141] libmachine: STDERR: 
	I0823 12:02:09.578036    4510 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2
	I0823 12:02:09.578040    4510 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:09.578084    4510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ca:5f:19:88:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/custom-flannel-953000/disk.qcow2
	I0823 12:02:09.579578    4510 main.go:141] libmachine: STDOUT: 
	I0823 12:02:09.579589    4510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:09.579600    4510 client.go:171] LocalClient.Create took 203.856541ms
	I0823 12:02:11.581688    4510 start.go:128] duration metric: createHost completed in 2.264595875s
	I0823 12:02:11.581750    4510 start.go:83] releasing machines lock for "custom-flannel-953000", held for 2.265126084s
	W0823 12:02:11.582096    4510 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:11.591693    4510 out.go:177] 
	W0823 12:02:11.595834    4510 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:02:11.595868    4510 out.go:239] * 
	* 
	W0823 12:02:11.598748    4510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:02:11.608767    4510 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)
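
Each of these logs records the same delete-and-retry flow: StartHost fails, the half-created VM is deleted, creation is retried once after 5 seconds, and the second failure exits with GUEST_PROVISION (exit status 80). A sketch of that control flow, with stand-in function names rather than minikube's actual API:

	// retry_sketch.go: illustrative only; createHost and deleteHost stand in
	// for the driver calls logged above.
	package main

	import (
		"fmt"
		"time"
	)

	func startWithRetry(createHost func() error, deleteHost func()) error {
		if err := createHost(); err == nil {
			return nil
		}
		deleteHost()                // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost()         // a second failure is surfaced to the caller
	}

	func main() {
		create := func() error {
			return fmt.Errorf("Failed to connect to \"/var/run/socket_vmnet\": Connection refused")
		}
		cleanup := func() { fmt.Println("deleting partial VM") }
		fmt.Println(startWithRetry(create, cleanup)) // both attempts fail here, as in the logs
	}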

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.839333958s)

-- stdout --
	* [false-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-953000 in cluster false-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:02:13.972100    4628 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:02:13.972211    4628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:13.972214    4628 out.go:309] Setting ErrFile to fd 2...
	I0823 12:02:13.972217    4628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:13.972331    4628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:02:13.973315    4628 out.go:303] Setting JSON to false
	I0823 12:02:13.988489    4628 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3707,"bootTime":1692813626,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:02:13.988561    4628 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:02:13.993757    4628 out.go:177] * [false-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:02:14.000697    4628 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:02:14.004742    4628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:02:14.000792    4628 notify.go:220] Checking for updates...
	I0823 12:02:14.007709    4628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:02:14.010699    4628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:02:14.013682    4628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:02:14.016637    4628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:02:14.020037    4628 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:02:14.020084    4628 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:02:14.024698    4628 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:02:14.031656    4628 start.go:298] selected driver: qemu2
	I0823 12:02:14.031661    4628 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:02:14.031667    4628 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:02:14.033597    4628 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:02:14.036661    4628 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:02:14.038193    4628 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:02:14.038230    4628 cni.go:84] Creating CNI manager for "false"
	I0823 12:02:14.038236    4628 start_flags.go:319] config:
	{Name:false-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:false-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:02:14.042326    4628 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:02:14.049698    4628 out.go:177] * Starting control plane node false-953000 in cluster false-953000
	I0823 12:02:14.053632    4628 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:02:14.053655    4628 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:02:14.053670    4628 cache.go:57] Caching tarball of preloaded images
	I0823 12:02:14.053732    4628 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:02:14.053738    4628 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:02:14.053820    4628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/false-953000/config.json ...
	I0823 12:02:14.053833    4628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/false-953000/config.json: {Name:mk889169ea9afa20723a374664645bf8853c87ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:02:14.054069    4628 start.go:365] acquiring machines lock for false-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:14.054100    4628 start.go:369] acquired machines lock for "false-953000" in 25.291µs
	I0823 12:02:14.054111    4628 start.go:93] Provisioning new machine with config: &{Name:false-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:false-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:14.054154    4628 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:14.058707    4628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:14.074245    4628 start.go:159] libmachine.API.Create for "false-953000" (driver="qemu2")
	I0823 12:02:14.074269    4628 client.go:168] LocalClient.Create starting
	I0823 12:02:14.074322    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:14.074349    4628 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:14.074363    4628 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:14.074402    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:14.074422    4628 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:14.074432    4628 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:14.074753    4628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:14.199458    4628 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:14.301854    4628 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:14.301861    4628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:14.302004    4628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2
	I0823 12:02:14.310477    4628 main.go:141] libmachine: STDOUT: 
	I0823 12:02:14.310496    4628 main.go:141] libmachine: STDERR: 
	I0823 12:02:14.310550    4628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2 +20000M
	I0823 12:02:14.317678    4628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:14.317688    4628 main.go:141] libmachine: STDERR: 
	I0823 12:02:14.317704    4628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2
	I0823 12:02:14.317711    4628 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:14.317754    4628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:30:49:be:ee:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2
	I0823 12:02:14.319251    4628 main.go:141] libmachine: STDOUT: 
	I0823 12:02:14.319267    4628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:14.319285    4628 client.go:171] LocalClient.Create took 245.016375ms
	I0823 12:02:16.321408    4628 start.go:128] duration metric: createHost completed in 2.267306s
	I0823 12:02:16.321496    4628 start.go:83] releasing machines lock for "false-953000", held for 2.267460875s
	W0823 12:02:16.321620    4628 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:16.330030    4628 out.go:177] * Deleting "false-953000" in qemu2 ...
	W0823 12:02:16.350021    4628 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:16.350046    4628 start.go:687] Will try again in 5 seconds ...
	I0823 12:02:21.352194    4628 start.go:365] acquiring machines lock for false-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:21.352803    4628 start.go:369] acquired machines lock for "false-953000" in 484.375µs
	I0823 12:02:21.352942    4628 start.go:93] Provisioning new machine with config: &{Name:false-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:false-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:21.353280    4628 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:21.362913    4628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:21.410824    4628 start.go:159] libmachine.API.Create for "false-953000" (driver="qemu2")
	I0823 12:02:21.410873    4628 client.go:168] LocalClient.Create starting
	I0823 12:02:21.410989    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:21.411049    4628 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:21.411068    4628 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:21.411140    4628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:21.411175    4628 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:21.411192    4628 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:21.411719    4628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:21.563435    4628 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:21.724848    4628 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:21.724856    4628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:21.725020    4628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2
	I0823 12:02:21.735047    4628 main.go:141] libmachine: STDOUT: 
	I0823 12:02:21.735063    4628 main.go:141] libmachine: STDERR: 
	I0823 12:02:21.735150    4628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2 +20000M
	I0823 12:02:21.742475    4628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:21.742491    4628 main.go:141] libmachine: STDERR: 
	I0823 12:02:21.742502    4628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2
	I0823 12:02:21.742511    4628 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:21.742547    4628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b6:a4:9d:20:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/false-953000/disk.qcow2
	I0823 12:02:21.744089    4628 main.go:141] libmachine: STDOUT: 
	I0823 12:02:21.744106    4628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:21.744117    4628 client.go:171] LocalClient.Create took 333.24875ms
	I0823 12:02:23.746203    4628 start.go:128] duration metric: createHost completed in 2.392973459s
	I0823 12:02:23.746325    4628 start.go:83] releasing machines lock for "false-953000", held for 2.393545417s
	W0823 12:02:23.746740    4628 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:23.755323    4628 out.go:177] 
	W0823 12:02:23.759410    4628 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:02:23.759469    4628 out.go:239] * 
	* 
	W0823 12:02:23.761975    4628 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:02:23.769258    4628 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
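
Because the connect attempt is refused rather than failing with "no such file or directory", the socket file itself is present on the agent; stat-ing it separates "daemon never started" from "daemon crashed and left a stale socket", the state these runs point to. A hypothetical agent-side pre-flight check (path and messages are assumptions, not part of the suite):

	// preflight.go: hypothetical check for the CI agent.
	package main

	import (
		"log"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		fi, err := os.Stat(sock)
		if err != nil {
			log.Fatalf("%s is missing, so socket_vmnet was never started: %v", sock, err)
		}
		if fi.Mode()&os.ModeSocket == 0 {
			log.Fatalf("%s exists but is not a unix socket", sock)
		}
		// Existence alone is not enough: a stale socket still refuses
		// connections, which is the failure mode throughout this report.
		log.Printf("%s present; dial it to confirm a live daemon", sock)
	}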

TestNetworkPlugins/group/kindnet/Start (9.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.667076333s)

-- stdout --
	* [kindnet-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-953000 in cluster kindnet-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:02:25.975357    4738 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:02:25.975459    4738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:25.975463    4738 out.go:309] Setting ErrFile to fd 2...
	I0823 12:02:25.975465    4738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:25.975571    4738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:02:25.976582    4738 out.go:303] Setting JSON to false
	I0823 12:02:25.991682    4738 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3719,"bootTime":1692813626,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:02:25.991745    4738 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:02:25.995974    4738 out.go:177] * [kindnet-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:02:26.003057    4738 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:02:26.006949    4738 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:02:26.003130    4738 notify.go:220] Checking for updates...
	I0823 12:02:26.012933    4738 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:02:26.015943    4738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:02:26.018981    4738 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:02:26.021991    4738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:02:26.025270    4738 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:02:26.025312    4738 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:02:26.029936    4738 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:02:26.036931    4738 start.go:298] selected driver: qemu2
	I0823 12:02:26.036936    4738 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:02:26.036943    4738 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:02:26.038894    4738 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:02:26.041898    4738 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:02:26.045058    4738 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:02:26.045083    4738 cni.go:84] Creating CNI manager for "kindnet"
	I0823 12:02:26.045088    4738 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0823 12:02:26.045094    4738 start_flags.go:319] config:
	{Name:kindnet-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kindnet-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:02:26.049253    4738 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:02:26.052987    4738 out.go:177] * Starting control plane node kindnet-953000 in cluster kindnet-953000
	I0823 12:02:26.060951    4738 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:02:26.060972    4738 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:02:26.060982    4738 cache.go:57] Caching tarball of preloaded images
	I0823 12:02:26.061052    4738 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:02:26.061058    4738 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:02:26.061122    4738 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/kindnet-953000/config.json ...
	I0823 12:02:26.061136    4738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/kindnet-953000/config.json: {Name:mk8a3e559e050feba895d0073f90077611223968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:02:26.061357    4738 start.go:365] acquiring machines lock for kindnet-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:26.061388    4738 start.go:369] acquired machines lock for "kindnet-953000" in 25.084µs
	I0823 12:02:26.061400    4738 start.go:93] Provisioning new machine with config: &{Name:kindnet-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.0 ClusterName:kindnet-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:26.061428    4738 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:26.069814    4738 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:26.086510    4738 start.go:159] libmachine.API.Create for "kindnet-953000" (driver="qemu2")
	I0823 12:02:26.086547    4738 client.go:168] LocalClient.Create starting
	I0823 12:02:26.086634    4738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:26.086673    4738 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:26.086684    4738 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:26.086730    4738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:26.086751    4738 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:26.086761    4738 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:26.087125    4738 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:26.206153    4738 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:26.262018    4738 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:26.262024    4738 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:26.262156    4738 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2
	I0823 12:02:26.270704    4738 main.go:141] libmachine: STDOUT: 
	I0823 12:02:26.270720    4738 main.go:141] libmachine: STDERR: 
	I0823 12:02:26.270764    4738 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2 +20000M
	I0823 12:02:26.277936    4738 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:26.277967    4738 main.go:141] libmachine: STDERR: 
	I0823 12:02:26.277989    4738 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2
	I0823 12:02:26.277999    4738 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:26.278042    4738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:98:3f:6b:28:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2
	I0823 12:02:26.279581    4738 main.go:141] libmachine: STDOUT: 
	I0823 12:02:26.279594    4738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:26.279616    4738 client.go:171] LocalClient.Create took 193.064167ms
	I0823 12:02:28.281698    4738 start.go:128] duration metric: createHost completed in 2.22032775s
	I0823 12:02:28.281757    4738 start.go:83] releasing machines lock for "kindnet-953000", held for 2.220435333s
	W0823 12:02:28.281841    4738 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:28.289138    4738 out.go:177] * Deleting "kindnet-953000" in qemu2 ...
	W0823 12:02:28.309796    4738 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:28.309821    4738 start.go:687] Will try again in 5 seconds ...
	I0823 12:02:33.311899    4738 start.go:365] acquiring machines lock for kindnet-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:33.312393    4738 start.go:369] acquired machines lock for "kindnet-953000" in 382.542µs
	I0823 12:02:33.312517    4738 start.go:93] Provisioning new machine with config: &{Name:kindnet-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.0 ClusterName:kindnet-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:33.312864    4738 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:33.321575    4738 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:33.365914    4738 start.go:159] libmachine.API.Create for "kindnet-953000" (driver="qemu2")
	I0823 12:02:33.365957    4738 client.go:168] LocalClient.Create starting
	I0823 12:02:33.366075    4738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:33.366123    4738 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:33.366153    4738 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:33.366233    4738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:33.366268    4738 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:33.366281    4738 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:33.366727    4738 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:33.498035    4738 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:33.552718    4738 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:33.552723    4738 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:33.552865    4738 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2
	I0823 12:02:33.561396    4738 main.go:141] libmachine: STDOUT: 
	I0823 12:02:33.561410    4738 main.go:141] libmachine: STDERR: 
	I0823 12:02:33.561460    4738 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2 +20000M
	I0823 12:02:33.568607    4738 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:33.568618    4738 main.go:141] libmachine: STDERR: 
	I0823 12:02:33.568630    4738 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2
	I0823 12:02:33.568634    4738 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:33.568675    4738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:0e:d4:31:4f:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kindnet-953000/disk.qcow2
	I0823 12:02:33.570212    4738 main.go:141] libmachine: STDOUT: 
	I0823 12:02:33.570223    4738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:33.570233    4738 client.go:171] LocalClient.Create took 204.279041ms
	I0823 12:02:35.572314    4738 start.go:128] duration metric: createHost completed in 2.259494833s
	I0823 12:02:35.572379    4738 start.go:83] releasing machines lock for "kindnet-953000", held for 2.260036292s
	W0823 12:02:35.572694    4738 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:35.583372    4738 out.go:177] 
	W0823 12:02:35.587493    4738 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:02:35.587519    4738 out.go:239] * 
	* 
	W0823 12:02:35.590065    4738 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:02:35.600391    4738 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.67s)
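
Every run above fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM never launches and minikube exits with status 80. A minimal probe of that socket, written as a hypothetical Go sketch (the socket path is taken from the log above; the probe itself is not part of the test harness):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Attempt the same unix-socket dial that socket_vmnet_client
        // performs. An error such as "connect: connection refused"
        // reproduces the failure above and means no socket_vmnet
        // daemon is listening at this path.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails, restarting the socket_vmnet service on the CI host (assuming the standard launchd-managed install) would likely clear the remaining TestNetworkPlugins failures, since each profile below fails identically before provisioning begins.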

TestNetworkPlugins/group/flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0823 12:02:46.180867    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.730024917s)

-- stdout --
	* [flannel-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-953000 in cluster flannel-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:02:37.924040    4855 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:02:37.924151    4855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:37.924154    4855 out.go:309] Setting ErrFile to fd 2...
	I0823 12:02:37.924156    4855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:37.924264    4855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:02:37.925331    4855 out.go:303] Setting JSON to false
	I0823 12:02:37.940439    4855 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3731,"bootTime":1692813626,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:02:37.940498    4855 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:02:37.945372    4855 out.go:177] * [flannel-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:02:37.952359    4855 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:02:37.952430    4855 notify.go:220] Checking for updates...
	I0823 12:02:37.959261    4855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:02:37.962355    4855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:02:37.965366    4855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:02:37.968311    4855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:02:37.971295    4855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:02:37.974576    4855 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:02:37.974615    4855 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:02:37.979345    4855 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:02:37.986271    4855 start.go:298] selected driver: qemu2
	I0823 12:02:37.986276    4855 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:02:37.986283    4855 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:02:37.988160    4855 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:02:37.991268    4855 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:02:37.994429    4855 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:02:37.994457    4855 cni.go:84] Creating CNI manager for "flannel"
	I0823 12:02:37.994472    4855 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0823 12:02:37.994478    4855 start_flags.go:319] config:
	{Name:flannel-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:flannel-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0}
	I0823 12:02:37.998511    4855 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:02:38.005290    4855 out.go:177] * Starting control plane node flannel-953000 in cluster flannel-953000
	I0823 12:02:38.009086    4855 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:02:38.009105    4855 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:02:38.009120    4855 cache.go:57] Caching tarball of preloaded images
	I0823 12:02:38.009201    4855 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:02:38.009207    4855 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:02:38.009269    4855 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/flannel-953000/config.json ...
	I0823 12:02:38.009288    4855 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/flannel-953000/config.json: {Name:mk1cb4c4e810d76eded05348d7782565d95ef3a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:02:38.009503    4855 start.go:365] acquiring machines lock for flannel-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:38.009534    4855 start.go:369] acquired machines lock for "flannel-953000" in 25.167µs
	I0823 12:02:38.009545    4855 start.go:93] Provisioning new machine with config: &{Name:flannel-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.0 ClusterName:flannel-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:38.009576    4855 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:38.014285    4855 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:38.029614    4855 start.go:159] libmachine.API.Create for "flannel-953000" (driver="qemu2")
	I0823 12:02:38.029636    4855 client.go:168] LocalClient.Create starting
	I0823 12:02:38.029678    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:38.029702    4855 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:38.029712    4855 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:38.029752    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:38.029773    4855 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:38.029784    4855 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:38.030062    4855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:38.144420    4855 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:38.305075    4855 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:38.305082    4855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:38.305244    4855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2
	I0823 12:02:38.314082    4855 main.go:141] libmachine: STDOUT: 
	I0823 12:02:38.314097    4855 main.go:141] libmachine: STDERR: 
	I0823 12:02:38.314145    4855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2 +20000M
	I0823 12:02:38.321383    4855 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:38.321404    4855 main.go:141] libmachine: STDERR: 
	I0823 12:02:38.321425    4855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2
	I0823 12:02:38.321432    4855 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:38.321474    4855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:e9:c1:4b:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2
	I0823 12:02:38.323027    4855 main.go:141] libmachine: STDOUT: 
	I0823 12:02:38.323042    4855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:38.323063    4855 client.go:171] LocalClient.Create took 293.429709ms
	I0823 12:02:40.325155    4855 start.go:128] duration metric: createHost completed in 2.3156405s
	I0823 12:02:40.325220    4855 start.go:83] releasing machines lock for "flannel-953000", held for 2.315756333s
	W0823 12:02:40.325335    4855 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:40.332858    4855 out.go:177] * Deleting "flannel-953000" in qemu2 ...
	W0823 12:02:40.353151    4855 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:40.353175    4855 start.go:687] Will try again in 5 seconds ...
	I0823 12:02:45.355248    4855 start.go:365] acquiring machines lock for flannel-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:45.355805    4855 start.go:369] acquired machines lock for "flannel-953000" in 433.5µs
	I0823 12:02:45.355917    4855 start.go:93] Provisioning new machine with config: &{Name:flannel-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.0 ClusterName:flannel-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:45.356232    4855 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:45.362974    4855 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:45.410434    4855 start.go:159] libmachine.API.Create for "flannel-953000" (driver="qemu2")
	I0823 12:02:45.410486    4855 client.go:168] LocalClient.Create starting
	I0823 12:02:45.410632    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:45.410694    4855 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:45.410718    4855 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:45.410792    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:45.410840    4855 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:45.410852    4855 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:45.411387    4855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:45.538262    4855 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:45.566103    4855 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:45.566109    4855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:45.566253    4855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2
	I0823 12:02:45.574742    4855 main.go:141] libmachine: STDOUT: 
	I0823 12:02:45.574755    4855 main.go:141] libmachine: STDERR: 
	I0823 12:02:45.574804    4855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2 +20000M
	I0823 12:02:45.581880    4855 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:45.581891    4855 main.go:141] libmachine: STDERR: 
	I0823 12:02:45.581903    4855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2
	I0823 12:02:45.581910    4855 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:45.581955    4855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:a9:3a:af:08:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/flannel-953000/disk.qcow2
	I0823 12:02:45.583380    4855 main.go:141] libmachine: STDOUT: 
	I0823 12:02:45.583393    4855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:45.583407    4855 client.go:171] LocalClient.Create took 172.921541ms
	I0823 12:02:47.585546    4855 start.go:128] duration metric: createHost completed in 2.229339792s
	I0823 12:02:47.585641    4855 start.go:83] releasing machines lock for "flannel-953000", held for 2.229884s
	W0823 12:02:47.586095    4855 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:47.595722    4855 out.go:177] 
	W0823 12:02:47.599696    4855 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:02:47.599743    4855 out.go:239] * 
	* 
	W0823 12:02:47.602286    4855 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:02:47.611670    4855 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.73s)
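
The disk-image preparation itself succeeds in every attempt: libmachine converts the raw image to qcow2 and then grows it by 20000 MB, which are exactly the two "executing: qemu-img ..." lines in the log. A sketch of those steps in Go via os/exec (createDisk is a hypothetical helper for illustration, not the libmachine API; paths are shortened):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDisk mirrors the two qemu-img invocations in the log:
    // convert the raw image to qcow2, then grow the qcow2 by extraMB
    // megabytes (the log uses +20000M).
    func createDisk(raw, qcow2 string, extraMB int) error {
        if out, err := exec.Command("qemu-img", "convert",
            "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
            return fmt.Errorf("convert: %v: %s", err, out)
        }
        if out, err := exec.Command("qemu-img", "resize",
            qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
            return fmt.Errorf("resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
            fmt.Println(err)
        }
    }

That these steps pass in every run confirms the qemu-img tooling is healthy; the group fails only at the subsequent socket_vmnet connection step.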

TestNetworkPlugins/group/enable-default-cni/Start (9.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.751171917s)

-- stdout --
	* [enable-default-cni-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-953000 in cluster enable-default-cni-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:02:50.015827    4976 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:02:50.015955    4976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:50.015957    4976 out.go:309] Setting ErrFile to fd 2...
	I0823 12:02:50.015960    4976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:02:50.016077    4976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:02:50.017082    4976 out.go:303] Setting JSON to false
	I0823 12:02:50.032471    4976 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3744,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:02:50.032529    4976 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:02:50.037297    4976 out.go:177] * [enable-default-cni-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:02:50.043247    4976 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:02:50.048236    4976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:02:50.043273    4976 notify.go:220] Checking for updates...
	I0823 12:02:50.054177    4976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:02:50.057216    4976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:02:50.060250    4976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:02:50.063103    4976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:02:50.066548    4976 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:02:50.066590    4976 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:02:50.071230    4976 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:02:50.078189    4976 start.go:298] selected driver: qemu2
	I0823 12:02:50.078195    4976 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:02:50.078202    4976 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:02:50.080141    4976 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:02:50.083206    4976 out.go:177] * Automatically selected the socket_vmnet network
	E0823 12:02:50.086183    4976 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0823 12:02:50.086193    4976 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:02:50.086225    4976 cni.go:84] Creating CNI manager for "bridge"
	I0823 12:02:50.086237    4976 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:02:50.086249    4976 start_flags.go:319] config:
	{Name:enable-default-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:enable-default-cni-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:02:50.090308    4976 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:02:50.097180    4976 out.go:177] * Starting control plane node enable-default-cni-953000 in cluster enable-default-cni-953000
	I0823 12:02:50.101155    4976 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:02:50.101184    4976 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:02:50.101202    4976 cache.go:57] Caching tarball of preloaded images
	I0823 12:02:50.101288    4976 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:02:50.101294    4976 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:02:50.101416    4976 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/enable-default-cni-953000/config.json ...
	I0823 12:02:50.101429    4976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/enable-default-cni-953000/config.json: {Name:mk1932c2171a5d01ee6218753c5d57942e3c79a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:02:50.101646    4976 start.go:365] acquiring machines lock for enable-default-cni-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:50.101686    4976 start.go:369] acquired machines lock for "enable-default-cni-953000" in 29.417µs
	I0823 12:02:50.101701    4976 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.28.0 ClusterName:enable-default-cni-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:50.101736    4976 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:50.109145    4976 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:50.124926    4976 start.go:159] libmachine.API.Create for "enable-default-cni-953000" (driver="qemu2")
	I0823 12:02:50.124953    4976 client.go:168] LocalClient.Create starting
	I0823 12:02:50.125017    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:50.125053    4976 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:50.125067    4976 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:50.125117    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:50.125140    4976 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:50.125147    4976 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:50.125753    4976 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:50.244144    4976 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:50.345895    4976 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:50.345900    4976 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:50.346044    4976 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2
	I0823 12:02:50.354841    4976 main.go:141] libmachine: STDOUT: 
	I0823 12:02:50.354854    4976 main.go:141] libmachine: STDERR: 
	I0823 12:02:50.354925    4976 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2 +20000M
	I0823 12:02:50.362117    4976 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:50.362136    4976 main.go:141] libmachine: STDERR: 
	I0823 12:02:50.362154    4976 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2
	I0823 12:02:50.362167    4976 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:50.362211    4976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a4:03:24:18:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2
	I0823 12:02:50.363788    4976 main.go:141] libmachine: STDOUT: 
	I0823 12:02:50.363803    4976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:50.363825    4976 client.go:171] LocalClient.Create took 238.872375ms
	I0823 12:02:52.365906    4976 start.go:128] duration metric: createHost completed in 2.264229583s
	I0823 12:02:52.365967    4976 start.go:83] releasing machines lock for "enable-default-cni-953000", held for 2.264348875s
	W0823 12:02:52.366059    4976 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:52.376320    4976 out.go:177] * Deleting "enable-default-cni-953000" in qemu2 ...
	W0823 12:02:52.396079    4976 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:52.396106    4976 start.go:687] Will try again in 5 seconds ...
	I0823 12:02:57.398237    4976 start.go:365] acquiring machines lock for enable-default-cni-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:02:57.398672    4976 start.go:369] acquired machines lock for "enable-default-cni-953000" in 328.125µs
	I0823 12:02:57.398812    4976 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:enable-default-cni-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:02:57.399105    4976 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:02:57.409685    4976 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:02:57.456587    4976 start.go:159] libmachine.API.Create for "enable-default-cni-953000" (driver="qemu2")
	I0823 12:02:57.456648    4976 client.go:168] LocalClient.Create starting
	I0823 12:02:57.456833    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:02:57.456922    4976 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:57.456945    4976 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:57.457054    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:02:57.457098    4976 main.go:141] libmachine: Decoding PEM data...
	I0823 12:02:57.457117    4976 main.go:141] libmachine: Parsing certificate...
	I0823 12:02:57.457680    4976 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:02:57.589606    4976 main.go:141] libmachine: Creating SSH key...
	I0823 12:02:57.675887    4976 main.go:141] libmachine: Creating Disk image...
	I0823 12:02:57.675892    4976 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:02:57.676055    4976 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2
	I0823 12:02:57.685097    4976 main.go:141] libmachine: STDOUT: 
	I0823 12:02:57.685109    4976 main.go:141] libmachine: STDERR: 
	I0823 12:02:57.685177    4976 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2 +20000M
	I0823 12:02:57.692414    4976 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:02:57.692426    4976 main.go:141] libmachine: STDERR: 
	I0823 12:02:57.692436    4976 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2
	I0823 12:02:57.692450    4976 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:02:57.692490    4976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:88:fb:78:ea:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/enable-default-cni-953000/disk.qcow2
	I0823 12:02:57.694070    4976 main.go:141] libmachine: STDOUT: 
	I0823 12:02:57.694094    4976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:02:57.694110    4976 client.go:171] LocalClient.Create took 237.458209ms
	I0823 12:02:59.696195    4976 start.go:128] duration metric: createHost completed in 2.297144458s
	I0823 12:02:59.696283    4976 start.go:83] releasing machines lock for "enable-default-cni-953000", held for 2.297639542s
	W0823 12:02:59.696683    4976 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:02:59.707443    4976 out.go:177] 
	W0823 12:02:59.711440    4976 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:02:59.711468    4976 out.go:239] * 
	* 
	W0823 12:02:59.713898    4976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:02:59.724136    4976 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.75s)
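Every start in this group dies the same way: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon was listening on the CI host. A minimal Go sketch of the same probe (a standalone diagnostic, not part of the test suite; the socket path is taken from the config dump above):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the unix socket that socket_vmnet_client connects to. A
// "connection refused" here matches the failure mode logged above:
// the qemu2 driver cannot hand its network fd over to socket_vmnet.
func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}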

TestStoppedBinaryUpgrade/Upgrade (2.49s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe: permission denied (729.208µs)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe: permission denied (2.086583ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe start -p stopped-upgrade-491000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe: permission denied (7.13475ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3654713565.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.49s)
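Unlike the socket_vmnet failures, this one never reaches QEMU: fork/exec returning "permission denied" on the freshly downloaded v1.6.2 binary means the temp file lacks its execute bit (exec requires u+x regardless of file contents). A hedged Go sketch of the usual remedy, marking the binary executable before invoking it (the path and argument below are illustrative, not from the harness):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Illustrative path; the test writes the legacy binary to a temp file.
	bin := "/tmp/minikube-v1.6.2"

	// fork/exec fails with "permission denied" unless the execute bit
	// is set, so chmod the download before running it.
	if err := os.Chmod(bin, 0o755); err != nil {
		fmt.Println("chmod:", err)
		return
	}
	out, err := exec.Command(bin, "version").CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}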

TestNetworkPlugins/group/bridge/Start (9.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.885913167s)

-- stdout --
	* [bridge-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-953000 in cluster bridge-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:01.944187    5098 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:01.944297    5098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:01.944300    5098 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:01.944302    5098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:01.944411    5098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:01.945378    5098 out.go:303] Setting JSON to false
	I0823 12:03:01.960646    5098 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3755,"bootTime":1692813626,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:01.960713    5098 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:01.965424    5098 out.go:177] * [bridge-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:01.972396    5098 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:01.972435    5098 notify.go:220] Checking for updates...
	I0823 12:03:01.979327    5098 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:01.982420    5098 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:01.985426    5098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:01.988318    5098 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:01.991386    5098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:01.994768    5098 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:01.994809    5098 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:01.999418    5098 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:02.006443    5098 start.go:298] selected driver: qemu2
	I0823 12:03:02.006450    5098 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:02.006457    5098 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:02.008400    5098 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:03:02.011337    5098 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:02.014531    5098 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:02.014559    5098 cni.go:84] Creating CNI manager for "bridge"
	I0823 12:03:02.014570    5098 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:03:02.014578    5098 start_flags.go:319] config:
	{Name:bridge-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:bridge-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:02.018666    5098 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:02.025454    5098 out.go:177] * Starting control plane node bridge-953000 in cluster bridge-953000
	I0823 12:03:02.029447    5098 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:02.029465    5098 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:02.029476    5098 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:02.029546    5098 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:02.029551    5098 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:02.029631    5098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/bridge-953000/config.json ...
	I0823 12:03:02.029652    5098 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/bridge-953000/config.json: {Name:mk8b9414493ff18ad61721bc1f14b353b8e251d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:02.029882    5098 start.go:365] acquiring machines lock for bridge-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:02.029912    5098 start.go:369] acquired machines lock for "bridge-953000" in 24.416µs
	I0823 12:03:02.029922    5098 start.go:93] Provisioning new machine with config: &{Name:bridge-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:bridge-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:02.029949    5098 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:02.038480    5098 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:03:02.054449    5098 start.go:159] libmachine.API.Create for "bridge-953000" (driver="qemu2")
	I0823 12:03:02.054474    5098 client.go:168] LocalClient.Create starting
	I0823 12:03:02.054525    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:02.054552    5098 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:02.054562    5098 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:02.054602    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:02.054621    5098 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:02.054628    5098 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:02.054942    5098 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:02.171161    5098 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:02.478592    5098 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:02.478604    5098 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:02.478779    5098 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2
	I0823 12:03:02.487834    5098 main.go:141] libmachine: STDOUT: 
	I0823 12:03:02.487850    5098 main.go:141] libmachine: STDERR: 
	I0823 12:03:02.487913    5098 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2 +20000M
	I0823 12:03:02.495096    5098 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:02.495110    5098 main.go:141] libmachine: STDERR: 
	I0823 12:03:02.495123    5098 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2
	I0823 12:03:02.495132    5098 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:02.495174    5098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ca:d8:b6:be:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2
	I0823 12:03:02.496628    5098 main.go:141] libmachine: STDOUT: 
	I0823 12:03:02.496642    5098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:02.496661    5098 client.go:171] LocalClient.Create took 441.199334ms
	I0823 12:03:04.502865    5098 start.go:128] duration metric: createHost completed in 2.467729166s
	I0823 12:03:04.502883    5098 start.go:83] releasing machines lock for "bridge-953000", held for 2.467797458s
	W0823 12:03:04.502900    5098 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:04.518371    5098 out.go:177] * Deleting "bridge-953000" in qemu2 ...
	W0823 12:03:04.526505    5098 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:04.526512    5098 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:09.536916    5098 start.go:365] acquiring machines lock for bridge-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:09.537538    5098 start.go:369] acquired machines lock for "bridge-953000" in 531.459µs
	I0823 12:03:09.537686    5098 start.go:93] Provisioning new machine with config: &{Name:bridge-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:bridge-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:09.538021    5098 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:09.546472    5098 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:03:09.596092    5098 start.go:159] libmachine.API.Create for "bridge-953000" (driver="qemu2")
	I0823 12:03:09.596143    5098 client.go:168] LocalClient.Create starting
	I0823 12:03:09.596271    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:09.596338    5098 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:09.596372    5098 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:09.596543    5098 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:09.596587    5098 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:09.596602    5098 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:09.597239    5098 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:09.725219    5098 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:09.757651    5098 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:09.757662    5098 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:09.757798    5098 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2
	I0823 12:03:09.766353    5098 main.go:141] libmachine: STDOUT: 
	I0823 12:03:09.766365    5098 main.go:141] libmachine: STDERR: 
	I0823 12:03:09.766419    5098 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2 +20000M
	I0823 12:03:09.773571    5098 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:09.773582    5098 main.go:141] libmachine: STDERR: 
	I0823 12:03:09.773595    5098 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2
	I0823 12:03:09.773602    5098 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:09.773646    5098 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:f4:6b:95:84:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/bridge-953000/disk.qcow2
	I0823 12:03:09.775100    5098 main.go:141] libmachine: STDOUT: 
	I0823 12:03:09.775113    5098 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:09.775125    5098 client.go:171] LocalClient.Create took 178.731625ms
	I0823 12:03:11.779828    5098 start.go:128] duration metric: createHost completed in 2.238892875s
	I0823 12:03:11.779905    5098 start.go:83] releasing machines lock for "bridge-953000", held for 2.239431292s
	W0823 12:03:11.780249    5098 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:11.792858    5098 out.go:177] 
	W0823 12:03:11.796849    5098 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:11.796900    5098 out.go:239] * 
	* 
	W0823 12:03:11.798891    5098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:11.810837    5098 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.89s)
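As in the other network-plugin groups, disk preparation succeeds and only the socket_vmnet launch fails: the log shows qemu-img converting the raw seed image to qcow2 and then growing it by +20000M before the VM start aborts. A sketch of that disk step as two exec calls mirroring the logged commands (paths below are illustrative; the driver works under .minikube/machines/<profile>/):

package main

import (
	"fmt"
	"os/exec"
)

// prepareDisk mirrors the two qemu-img invocations from the log:
// convert the raw boot image to qcow2, then resize it upward.
func prepareDisk(raw, qcow2 string, extraMB int) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
	}
}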

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-491000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-491000: exit status 85 (113.69225ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo cat                           | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo cat                           | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo cat                           | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo docker                        | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo cat                           | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo cat                           | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo cat                           | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo cat                           | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo                               | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo find                          | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-953000 sudo crio                          | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p flannel-953000                                    | flannel-953000            | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT | 23 Aug 23 12:02 PDT |
	| start   | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:02 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cat /etc/nsswitch.conf                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cat /etc/hosts                                  |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cat /etc/resolv.conf                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo crictl pods                                     |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo crictl ps --all                                 |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo find /etc/cni -type f                           |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo ip a s                                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo ip r s                                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo iptables-save                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000 sudo cat                | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000 sudo cat                | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000 sudo cat                | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-953000                         | enable-default-cni-953000 | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT | 23 Aug 23 12:03 PDT |
	| start   | -p bridge-953000 --memory=3072                       | bridge-953000             | jenkins | v1.31.2 | 23 Aug 23 12:03 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 12:03:01
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 12:03:01.944187    5098 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:01.944297    5098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:01.944300    5098 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:01.944302    5098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:01.944411    5098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:01.945378    5098 out.go:303] Setting JSON to false
	I0823 12:03:01.960646    5098 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3755,"bootTime":1692813626,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:01.960713    5098 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:01.965424    5098 out.go:177] * [bridge-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:01.972396    5098 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:01.972435    5098 notify.go:220] Checking for updates...
	I0823 12:03:01.979327    5098 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:01.982420    5098 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:01.985426    5098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:01.988318    5098 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:01.991386    5098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:01.994768    5098 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:01.994809    5098 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:01.999418    5098 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:02.006443    5098 start.go:298] selected driver: qemu2
	I0823 12:03:02.006450    5098 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:02.006457    5098 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:02.008400    5098 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:03:02.011337    5098 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:02.014531    5098 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:02.014559    5098 cni.go:84] Creating CNI manager for "bridge"
	I0823 12:03:02.014570    5098 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:03:02.014578    5098 start_flags.go:319] config:
	{Name:bridge-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:bridge-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:02.018666    5098 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:02.025454    5098 out.go:177] * Starting control plane node bridge-953000 in cluster bridge-953000
	I0823 12:03:02.029447    5098 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:02.029465    5098 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:02.029476    5098 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:02.029546    5098 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:02.029551    5098 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:02.029631    5098 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/bridge-953000/config.json ...
	I0823 12:03:02.029652    5098 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/bridge-953000/config.json: {Name:mk8b9414493ff18ad61721bc1f14b353b8e251d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:02.029882    5098 start.go:365] acquiring machines lock for bridge-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:02.029912    5098 start.go:369] acquired machines lock for "bridge-953000" in 24.416µs
	I0823 12:03:02.029922    5098 start.go:93] Provisioning new machine with config: &{Name:bridge-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:bridge-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:02.029949    5098 start.go:125] createHost starting for "" (driver="qemu2")
	
	* 
	* Profile "stopped-upgrade-491000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-491000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
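Exit status 85 here differs from the socket_vmnet provisioning failures (exit status 80) elsewhere in this report: `minikube logs` simply has no profile to read, as the 'Profile "stopped-upgrade-491000" not found' message above shows. A minimal recovery sketch, using only the commands that message itself suggests:

	minikube profile list                      # confirm which profiles survived the upgrade
	minikube start -p stopped-upgrade-491000   # recreate the cluster so logs can be collected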

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-953000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.676843333s)

                                                
                                                
-- stdout --
	* [kubenet-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-953000 in cluster kubenet-953000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 12:03:04.408971    5126 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:04.409084    5126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:04.409087    5126 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:04.409089    5126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:04.409194    5126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:04.410202    5126 out.go:303] Setting JSON to false
	I0823 12:03:04.425300    5126 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3758,"bootTime":1692813626,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:04.425355    5126 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:04.429233    5126 out.go:177] * [kubenet-953000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:04.436321    5126 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:04.436387    5126 notify.go:220] Checking for updates...
	I0823 12:03:04.440199    5126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:04.443260    5126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:04.446315    5126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:04.449289    5126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:04.452352    5126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:04.455627    5126 config.go:182] Loaded profile config "bridge-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:04.455693    5126 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:04.455730    5126 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:04.459213    5126 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:04.466341    5126 start.go:298] selected driver: qemu2
	I0823 12:03:04.466346    5126 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:04.466352    5126 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:04.468161    5126 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:03:04.469599    5126 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:04.472435    5126 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:04.472464    5126 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0823 12:03:04.472469    5126 start_flags.go:319] config:
	{Name:kubenet-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubenet-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:04.476527    5126 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:04.483253    5126 out.go:177] * Starting control plane node kubenet-953000 in cluster kubenet-953000
	I0823 12:03:04.487344    5126 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:04.487362    5126 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:04.487377    5126 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:04.487438    5126 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:04.487445    5126 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:04.487513    5126 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/kubenet-953000/config.json ...
	I0823 12:03:04.487525    5126 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/kubenet-953000/config.json: {Name:mk43fb7a42b3c9d8717d72bcc6dc659422a39e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:04.487746    5126 start.go:365] acquiring machines lock for kubenet-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:04.502913    5126 start.go:369] acquired machines lock for "kubenet-953000" in 15.13125ms
	I0823 12:03:04.502954    5126 start.go:93] Provisioning new machine with config: &{Name:kubenet-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubenet-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:04.502993    5126 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:04.511441    5126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:03:04.528896    5126 start.go:159] libmachine.API.Create for "kubenet-953000" (driver="qemu2")
	I0823 12:03:04.528928    5126 client.go:168] LocalClient.Create starting
	I0823 12:03:04.528979    5126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:04.529009    5126 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:04.529018    5126 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:04.529064    5126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:04.529084    5126 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:04.529100    5126 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:04.531711    5126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:04.647961    5126 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:04.688142    5126 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:04.688148    5126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:04.688273    5126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2
	I0823 12:03:04.696691    5126 main.go:141] libmachine: STDOUT: 
	I0823 12:03:04.696703    5126 main.go:141] libmachine: STDERR: 
	I0823 12:03:04.696752    5126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2 +20000M
	I0823 12:03:04.703941    5126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:04.703953    5126 main.go:141] libmachine: STDERR: 
	I0823 12:03:04.703963    5126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2
	I0823 12:03:04.703968    5126 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:04.703998    5126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:91:0f:0f:19:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2
	I0823 12:03:04.705512    5126 main.go:141] libmachine: STDOUT: 
	I0823 12:03:04.705524    5126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:04.705545    5126 client.go:171] LocalClient.Create took 176.273375ms
	I0823 12:03:06.711448    5126 start.go:128] duration metric: createHost completed in 2.2044625s
	I0823 12:03:06.711515    5126 start.go:83] releasing machines lock for "kubenet-953000", held for 2.204624125s
	W0823 12:03:06.711564    5126 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:06.724644    5126 out.go:177] * Deleting "kubenet-953000" in qemu2 ...
	W0823 12:03:06.743228    5126 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:06.743257    5126 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:11.752606    5126 start.go:365] acquiring machines lock for kubenet-953000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:11.779992    5126 start.go:369] acquired machines lock for "kubenet-953000" in 27.267625ms
	I0823 12:03:11.780160    5126 start.go:93] Provisioning new machine with config: &{Name:kubenet-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubenet-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:11.780424    5126 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:11.788816    5126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0823 12:03:11.833941    5126 start.go:159] libmachine.API.Create for "kubenet-953000" (driver="qemu2")
	I0823 12:03:11.833995    5126 client.go:168] LocalClient.Create starting
	I0823 12:03:11.834139    5126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:11.834194    5126 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:11.834210    5126 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:11.834282    5126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:11.834311    5126 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:11.834324    5126 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:11.834841    5126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:11.963661    5126 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:12.015255    5126 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:12.015265    5126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:12.015461    5126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2
	I0823 12:03:12.024735    5126 main.go:141] libmachine: STDOUT: 
	I0823 12:03:12.024754    5126 main.go:141] libmachine: STDERR: 
	I0823 12:03:12.024810    5126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2 +20000M
	I0823 12:03:12.032745    5126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:12.032761    5126 main.go:141] libmachine: STDERR: 
	I0823 12:03:12.032779    5126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2
	I0823 12:03:12.032784    5126 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:12.032826    5126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ad:8f:f4:81:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/kubenet-953000/disk.qcow2
	I0823 12:03:12.034395    5126 main.go:141] libmachine: STDOUT: 
	I0823 12:03:12.034408    5126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:12.034422    5126 client.go:171] LocalClient.Create took 200.179833ms
	I0823 12:03:14.038554    5126 start.go:128] duration metric: createHost completed in 2.25560125s
	I0823 12:03:14.038570    5126 start.go:83] releasing machines lock for "kubenet-953000", held for 2.256042292s
	W0823 12:03:14.038640    5126 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:14.049918    5126 out.go:177] 
	W0823 12:03:14.053876    5126 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:14.053881    5126 out.go:239] * 
	* 
	W0823 12:03:14.054723    5126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:14.064958    5126 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.68s)
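Like the other qemu2 starts in this report, this failure never reaches Kubernetes: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which exits as soon as the socket_vmnet daemon at /var/run/socket_vmnet refuses the connection. A host-side probe (a sketch only; the paths come from the qemu command line above, and the tools are stock macOS userland) separates a stopped daemon from a permissions problem:

	# Does the unix socket exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo reachable

If the socket file is missing or the probe is refused, restarting the socket_vmnet service on the CI host should clear every 'Failed to connect to "/var/run/socket_vmnet": Connection refused' failure in this group at once.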

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-499000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-499000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.939132625s)

                                                
                                                
-- stdout --
	* [old-k8s-version-499000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-499000 in cluster old-k8s-version-499000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-499000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 12:03:13.944584    5240 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:13.944695    5240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:13.944698    5240 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:13.944700    5240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:13.944798    5240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:13.945918    5240 out.go:303] Setting JSON to false
	I0823 12:03:13.961309    5240 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3767,"bootTime":1692813626,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:13.961382    5240 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:13.965935    5240 out.go:177] * [old-k8s-version-499000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:13.972884    5240 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:13.976870    5240 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:13.972944    5240 notify.go:220] Checking for updates...
	I0823 12:03:13.982753    5240 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:13.985856    5240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:13.988899    5240 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:13.991772    5240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:13.995168    5240 config.go:182] Loaded profile config "kubenet-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:13.995234    5240 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:13.995276    5240 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:13.999896    5240 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:14.006831    5240 start.go:298] selected driver: qemu2
	I0823 12:03:14.006835    5240 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:14.006840    5240 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:14.008729    5240 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:03:14.011859    5240 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:14.015176    5240 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:14.015206    5240 cni.go:84] Creating CNI manager for ""
	I0823 12:03:14.015215    5240 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 12:03:14.015223    5240 start_flags.go:319] config:
	{Name:old-k8s-version-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:14.019917    5240 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:14.026960    5240 out.go:177] * Starting control plane node old-k8s-version-499000 in cluster old-k8s-version-499000
	I0823 12:03:14.030871    5240 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 12:03:14.030892    5240 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:14.030911    5240 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:14.030966    5240 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:14.030973    5240 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0823 12:03:14.031054    5240 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/old-k8s-version-499000/config.json ...
	I0823 12:03:14.031067    5240 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/old-k8s-version-499000/config.json: {Name:mkda32d62c49cbb0828af237fed30de425b368c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:14.031264    5240 start.go:365] acquiring machines lock for old-k8s-version-499000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:14.038589    5240 start.go:369] acquired machines lock for "old-k8s-version-499000" in 7.311042ms
	I0823 12:03:14.038604    5240 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:14.038650    5240 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:14.042708    5240 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:14.058574    5240 start.go:159] libmachine.API.Create for "old-k8s-version-499000" (driver="qemu2")
	I0823 12:03:14.058593    5240 client.go:168] LocalClient.Create starting
	I0823 12:03:14.058668    5240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:14.058693    5240 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:14.058706    5240 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:14.058751    5240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:14.058770    5240 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:14.058777    5240 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:14.069452    5240 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:14.192308    5240 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:14.271937    5240 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:14.271947    5240 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:14.272110    5240 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:14.281256    5240 main.go:141] libmachine: STDOUT: 
	I0823 12:03:14.281276    5240 main.go:141] libmachine: STDERR: 
	I0823 12:03:14.281342    5240 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2 +20000M
	I0823 12:03:14.289109    5240 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:14.289129    5240 main.go:141] libmachine: STDERR: 
	I0823 12:03:14.289146    5240 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:14.289151    5240 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:14.289188    5240 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:79:94:d7:1d:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:14.290798    5240 main.go:141] libmachine: STDOUT: 
	I0823 12:03:14.290815    5240 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:14.290836    5240 client.go:171] LocalClient.Create took 231.994833ms
	I0823 12:03:16.293884    5240 start.go:128] duration metric: createHost completed in 2.253061375s
	I0823 12:03:16.293900    5240 start.go:83] releasing machines lock for "old-k8s-version-499000", held for 2.2531415s
	W0823 12:03:16.293912    5240 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:16.308794    5240 out.go:177] * Deleting "old-k8s-version-499000" in qemu2 ...
	W0823 12:03:16.316449    5240 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:16.316460    5240 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:21.320549    5240 start.go:365] acquiring machines lock for old-k8s-version-499000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:21.321044    5240 start.go:369] acquired machines lock for "old-k8s-version-499000" in 410.292µs
	I0823 12:03:21.321178    5240 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:21.321531    5240 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:21.327168    5240 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:21.374083    5240 start.go:159] libmachine.API.Create for "old-k8s-version-499000" (driver="qemu2")
	I0823 12:03:21.374152    5240 client.go:168] LocalClient.Create starting
	I0823 12:03:21.374322    5240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:21.374392    5240 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:21.374411    5240 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:21.374494    5240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:21.374535    5240 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:21.374555    5240 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:21.375115    5240 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:21.500391    5240 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:21.803710    5240 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:21.803720    5240 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:21.803872    5240 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:21.812583    5240 main.go:141] libmachine: STDOUT: 
	I0823 12:03:21.812602    5240 main.go:141] libmachine: STDERR: 
	I0823 12:03:21.812660    5240 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2 +20000M
	I0823 12:03:21.820150    5240 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:21.820164    5240 main.go:141] libmachine: STDERR: 
	I0823 12:03:21.820179    5240 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:21.820193    5240 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:21.820234    5240 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:67:29:b2:6f:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:21.821749    5240 main.go:141] libmachine: STDOUT: 
	I0823 12:03:21.821762    5240 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:21.821777    5240 client.go:171] LocalClient.Create took 447.342084ms
	I0823 12:03:23.825117    5240 start.go:128] duration metric: createHost completed in 2.502083875s
	I0823 12:03:23.825199    5240 start.go:83] releasing machines lock for "old-k8s-version-499000", held for 2.502669459s
	W0823 12:03:23.825630    5240 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-499000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-499000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:23.838084    5240 out.go:177] 
	W0823 12:03:23.842254    5240 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:23.842294    5240 out.go:239] * 
	* 
	W0823 12:03:23.844711    5240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:23.856032    5240 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-499000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (51.988542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.99s)
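
Both createHost attempts above die at the same step: socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so QEMU is never launched and minikube gives up after one retry. The no-preload profile below fails identically, which points at the socket_vmnet daemon on the build agent rather than at either test. A minimal pre-flight check might look like the following (paths are copied from the log; the daemon invocation follows the socket_vmnet README and is an assumption about how this agent is provisioned):

    # Is the socket present, and is a socket_vmnet daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If not, start it per the socket_vmnet README (gateway address is an example):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet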

TestStartStop/group/no-preload/serial/FirstStart (10.13s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-513000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-513000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (10.067682083s)

-- stdout --
	* [no-preload-513000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-513000 in cluster no-preload-513000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:16.190906    5346 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:16.191023    5346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:16.191025    5346 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:16.191028    5346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:16.191151    5346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:16.192184    5346 out.go:303] Setting JSON to false
	I0823 12:03:16.207448    5346 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3770,"bootTime":1692813626,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:16.207505    5346 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:16.212689    5346 out.go:177] * [no-preload-513000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:16.219785    5346 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:16.219789    5346 notify.go:220] Checking for updates...
	I0823 12:03:16.223755    5346 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:16.226735    5346 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:16.229698    5346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:16.232694    5346 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:16.235647    5346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:16.239115    5346 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:16.239179    5346 config.go:182] Loaded profile config "old-k8s-version-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0823 12:03:16.239227    5346 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:16.243762    5346 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:16.250687    5346 start.go:298] selected driver: qemu2
	I0823 12:03:16.250694    5346 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:16.250701    5346 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:16.252699    5346 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:03:16.255712    5346 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:16.258705    5346 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:16.258730    5346 cni.go:84] Creating CNI manager for ""
	I0823 12:03:16.258739    5346 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:03:16.258744    5346 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:03:16.258756    5346 start_flags.go:319] config:
	{Name:no-preload-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:no-preload-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:16.262931    5346 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.269801    5346 out.go:177] * Starting control plane node no-preload-513000 in cluster no-preload-513000
	I0823 12:03:16.273751    5346 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:16.273843    5346 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/no-preload-513000/config.json ...
	I0823 12:03:16.273865    5346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/no-preload-513000/config.json: {Name:mk1e9ebb575b46c8ff78d13790830993ae97c372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:16.273905    5346 cache.go:107] acquiring lock: {Name:mkbe5fdb0accb10720d891c59f16f2516569ec3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.273906    5346 cache.go:107] acquiring lock: {Name:mk14ef687e087e3d018c03d03612817b680a4af4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.273941    5346 cache.go:107] acquiring lock: {Name:mkbcfc6d22278ee41b4cce4841e3a694da498bc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.274024    5346 cache.go:107] acquiring lock: {Name:mkd8a019f05b5c31f859ac085e05a97c2ef2a5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.274060    5346 cache.go:107] acquiring lock: {Name:mk9549ee4e302f062f08cb322beff50a1bfaddda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.274104    5346 start.go:365] acquiring machines lock for no-preload-513000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:16.274095    5346 cache.go:107] acquiring lock: {Name:mk4fcbb268cd9db9e3d5cd55562538ad1bf3a064 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.274108    5346 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0
	I0823 12:03:16.274096    5346 cache.go:107] acquiring lock: {Name:mkd029ec316e007a0d89190c7741da41ca847750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.274120    5346 cache.go:107] acquiring lock: {Name:mk11a49035bb9e2e5676ec785ce9875eff286241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:16.274472    5346 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0823 12:03:16.274484    5346 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0823 12:03:16.274500    5346 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0
	I0823 12:03:16.274522    5346 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0823 12:03:16.274545    5346 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0
	I0823 12:03:16.274608    5346 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0
	I0823 12:03:16.274617    5346 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0823 12:03:16.274679    5346 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 688.875µs
	I0823 12:03:16.274686    5346 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0823 12:03:16.281621    5346 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0
	I0823 12:03:16.281631    5346 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0823 12:03:16.281658    5346 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0823 12:03:16.281672    5346 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0823 12:03:16.281722    5346 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0
	I0823 12:03:16.281762    5346 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0
	I0823 12:03:16.282303    5346 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0
	I0823 12:03:16.294003    5346 start.go:369] acquired machines lock for "no-preload-513000" in 19.864ms
	I0823 12:03:16.294036    5346 start.go:93] Provisioning new machine with config: &{Name:no-preload-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:no-preload-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:16.294136    5346 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:16.301733    5346 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:16.316510    5346 start.go:159] libmachine.API.Create for "no-preload-513000" (driver="qemu2")
	I0823 12:03:16.316530    5346 client.go:168] LocalClient.Create starting
	I0823 12:03:16.316616    5346 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:16.316639    5346 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:16.316652    5346 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:16.316690    5346 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:16.316709    5346 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:16.316719    5346 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:16.317051    5346 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:16.440990    5346 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:16.478706    5346 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:16.478716    5346 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:16.478885    5346 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:16.487702    5346 main.go:141] libmachine: STDOUT: 
	I0823 12:03:16.487723    5346 main.go:141] libmachine: STDERR: 
	I0823 12:03:16.487782    5346 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2 +20000M
	I0823 12:03:16.495813    5346 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:16.495837    5346 main.go:141] libmachine: STDERR: 
	I0823 12:03:16.495858    5346 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:16.495866    5346 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:16.495918    5346 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:26:46:3d:fe:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:16.497617    5346 main.go:141] libmachine: STDOUT: 
	I0823 12:03:16.497632    5346 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:16.497652    5346 client.go:171] LocalClient.Create took 180.9585ms
	I0823 12:03:16.851205    5346 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0
	I0823 12:03:16.977621    5346 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0823 12:03:17.133592    5346 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0823 12:03:17.133609    5346 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 858.956667ms
	I0823 12:03:17.133616    5346 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0823 12:03:17.184663    5346 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0823 12:03:17.343209    5346 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0823 12:03:17.552097    5346 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0
	I0823 12:03:17.788384    5346 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0
	I0823 12:03:17.983996    5346 cache.go:162] opening:  /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0
	I0823 12:03:18.259158    5346 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0823 12:03:18.259206    5346 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 1.983426334s
	I0823 12:03:18.259238    5346 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0823 12:03:18.499497    5346 start.go:128] duration metric: createHost completed in 2.203509042s
	I0823 12:03:18.499548    5346 start.go:83] releasing machines lock for "no-preload-513000", held for 2.203711083s
	W0823 12:03:18.499614    5346 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:18.511590    5346 out.go:177] * Deleting "no-preload-513000" in qemu2 ...
	W0823 12:03:18.532769    5346 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:18.532815    5346 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:19.784030    5346 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0 exists
	I0823 12:03:19.784083    5346 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0" took 3.507246s
	I0823 12:03:19.784118    5346 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0 succeeded
	I0823 12:03:20.746585    5346 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0 exists
	I0823 12:03:20.746661    5346 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0" took 4.469328125s
	I0823 12:03:20.746709    5346 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0 succeeded
	I0823 12:03:21.466672    5346 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0 exists
	I0823 12:03:21.466689    5346 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0" took 5.18889825s
	I0823 12:03:21.466698    5346 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0 succeeded
	I0823 12:03:21.590742    5346 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0 exists
	I0823 12:03:21.590756    5346 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0" took 5.312732208s
	I0823 12:03:21.590763    5346 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0 succeeded
	I0823 12:03:23.536328    5346 start.go:365] acquiring machines lock for no-preload-513000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:23.825292    5346 start.go:369] acquired machines lock for "no-preload-513000" in 288.727209ms
	I0823 12:03:23.825484    5346 start.go:93] Provisioning new machine with config: &{Name:no-preload-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:no-preload-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:23.825694    5346 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:23.834133    5346 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:23.881434    5346 start.go:159] libmachine.API.Create for "no-preload-513000" (driver="qemu2")
	I0823 12:03:23.881472    5346 client.go:168] LocalClient.Create starting
	I0823 12:03:23.881615    5346 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:23.881679    5346 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:23.881703    5346 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:23.881781    5346 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:23.881813    5346 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:23.881831    5346 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:23.882445    5346 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:24.015823    5346 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:24.164546    5346 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:24.164553    5346 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:24.164699    5346 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:24.179844    5346 main.go:141] libmachine: STDOUT: 
	I0823 12:03:24.179867    5346 main.go:141] libmachine: STDERR: 
	I0823 12:03:24.179942    5346 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2 +20000M
	I0823 12:03:24.187799    5346 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:24.187815    5346 main.go:141] libmachine: STDERR: 
	I0823 12:03:24.187843    5346 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:24.187849    5346 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:24.187903    5346 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:f9:b7:e2:88:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:24.189628    5346 main.go:141] libmachine: STDOUT: 
	I0823 12:03:24.189648    5346 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:24.189661    5346 client.go:171] LocalClient.Create took 308.018541ms
	I0823 12:03:24.719218    5346 cache.go:157] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0823 12:03:24.719290    5346 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 8.439518041s
	I0823 12:03:24.719332    5346 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0823 12:03:24.719425    5346 cache.go:87] Successfully saved all images to host disk.
	I0823 12:03:26.192149    5346 start.go:128] duration metric: createHost completed in 2.365237959s
	I0823 12:03:26.192223    5346 start.go:83] releasing machines lock for "no-preload-513000", held for 2.365722209s
	W0823 12:03:26.192554    5346 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:26.205092    5346 out.go:177] 
	W0823 12:03:26.209169    5346 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:26.209198    5346 out.go:239] * 
	* 
	W0823 12:03:26.212047    5346 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:26.222017    5346 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-513000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (57.725792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.13s)
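
Unlike the preloaded profiles, this --preload=false run pulls and caches each v1.28.0 image individually, and the cache.go lines above show every save completing ("Successfully saved all images to host disk") even though the VM never booted on either attempt. The cache lives on the host, so a re-run would skip the pulls; one way to confirm, using the cache path from the log:

    # Images were cached despite the failed start and will be reused on the next run:
    ls /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/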

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-499000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-499000 create -f testdata/busybox.yaml: exit status 1 (31.48075ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-499000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (33.30475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (33.144459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
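
This failure is a cascade from FirstStart rather than an independent regression: the VM never booted, so no usable old-k8s-version-499000 context exists, and kubectl create -f surfaces that as "error: no openapi getter" (EnableAddonWhileActive below gets the more explicit 'context "old-k8s-version-499000" does not exist'). A sketch of how one might separate a missing context from an unreachable cluster on the agent; exact error text may vary by kubectl version:

    # Is the context registered at all? (Expected to fail here.)
    kubectl config get-contexts old-k8s-version-499000
    # Any create/apply against the context fails for the same root cause:
    kubectl --context old-k8s-version-499000 create -f testdata/busybox.yaml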

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-499000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-499000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-499000 describe deploy/metrics-server -n kube-system: exit status 1 (26.989958ms)

** stderr ** 
	error: context "old-k8s-version-499000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-499000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (29.687708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
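
Note that the "addons enable" step itself succeeded: it only rewrites the profile's stored config, which is why the SecondStart dump below carries CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] and CustomAddonRegistries:map[MetricsServer:fake.domain]. Only the kubectl verification needs a live cluster. The persisted overrides can be read straight from the profile file; a sketch, using the profile path format shown earlier in this log:

    # The overrides land in the profile JSON even with no running VM:
    grep -A2 CustomAddonRegistries /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/old-k8s-version-499000/config.json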

TestStartStop/group/old-k8s-version/serial/SecondStart (7.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-499000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-499000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (7.000734833s)

-- stdout --
	* [old-k8s-version-499000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-499000 in cluster old-k8s-version-499000
	* Restarting existing qemu2 VM for "old-k8s-version-499000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-499000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:24.306196    5481 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:24.306288    5481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:24.306291    5481 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:24.306293    5481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:24.306417    5481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:24.307364    5481 out.go:303] Setting JSON to false
	I0823 12:03:24.322483    5481 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3778,"bootTime":1692813626,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:24.322583    5481 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:24.327087    5481 out.go:177] * [old-k8s-version-499000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:24.338074    5481 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:24.334090    5481 notify.go:220] Checking for updates...
	I0823 12:03:24.343986    5481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:24.350899    5481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:24.359035    5481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:24.367023    5481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:24.374952    5481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:24.379312    5481 config.go:182] Loaded profile config "old-k8s-version-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0823 12:03:24.383021    5481 out.go:177] * Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
	I0823 12:03:24.387049    5481 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:24.391020    5481 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 12:03:24.399043    5481 start.go:298] selected driver: qemu2
	I0823 12:03:24.399048    5481 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:24.399124    5481 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:24.401165    5481 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:24.401197    5481 cni.go:84] Creating CNI manager for ""
	I0823 12:03:24.401203    5481 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 12:03:24.401208    5481 start_flags.go:319] config:
	{Name:old-k8s-version-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:24.405325    5481 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:24.412976    5481 out.go:177] * Starting control plane node old-k8s-version-499000 in cluster old-k8s-version-499000
	I0823 12:03:24.417052    5481 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 12:03:24.417077    5481 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:24.417124    5481 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:24.417187    5481 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:24.417193    5481 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0823 12:03:24.417518    5481 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/old-k8s-version-499000/config.json ...
	I0823 12:03:24.417844    5481 start.go:365] acquiring machines lock for old-k8s-version-499000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:26.192378    5481 start.go:369] acquired machines lock for "old-k8s-version-499000" in 1.773600875s
	I0823 12:03:26.192555    5481 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:26.192589    5481 fix.go:54] fixHost starting: 
	I0823 12:03:26.193294    5481 fix.go:102] recreateIfNeeded on old-k8s-version-499000: state=Stopped err=<nil>
	W0823 12:03:26.193330    5481 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:26.205092    5481 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-499000" ...
	I0823 12:03:26.209248    5481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:67:29:b2:6f:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:26.218884    5481 main.go:141] libmachine: STDOUT: 
	I0823 12:03:26.218942    5481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:26.219089    5481 fix.go:56] fixHost completed within 26.490375ms
	I0823 12:03:26.219108    5481 start.go:83] releasing machines lock for "old-k8s-version-499000", held for 26.67525ms
	W0823 12:03:26.219140    5481 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:26.219289    5481 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:26.219305    5481 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:31.223548    5481 start.go:365] acquiring machines lock for old-k8s-version-499000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:31.223915    5481 start.go:369] acquired machines lock for "old-k8s-version-499000" in 291.333µs
	I0823 12:03:31.224045    5481 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:31.224072    5481 fix.go:54] fixHost starting: 
	I0823 12:03:31.224826    5481 fix.go:102] recreateIfNeeded on old-k8s-version-499000: state=Stopped err=<nil>
	W0823 12:03:31.224851    5481 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:31.230432    5481 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-499000" ...
	I0823 12:03:31.238667    5481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:67:29:b2:6f:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/old-k8s-version-499000/disk.qcow2
	I0823 12:03:31.247732    5481 main.go:141] libmachine: STDOUT: 
	I0823 12:03:31.247785    5481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:31.247877    5481 fix.go:56] fixHost completed within 23.778ms
	I0823 12:03:31.247893    5481 start.go:83] releasing machines lock for "old-k8s-version-499000", held for 23.951667ms
	W0823 12:03:31.248048    5481 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-499000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-499000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:31.255433    5481 out.go:177] 
	W0823 12:03:31.259524    5481 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:31.259548    5481 out.go:239] * 
	* 
	W0823 12:03:31.261872    5481 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:31.270440    5481 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-499000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (68.058375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.07s)
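Every restart attempt in this block dies at the same point: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon's unix socket at /var/run/socket_vmnet, so the VM never boots and minikube gives up after its single retry. A hedged triage sketch for the CI host follows; the checks use only paths taken from the log above, while the restart line assumes a Homebrew-managed socket_vmnet service and must be adapted to however the daemon is actually installed:

    # Is the unix socket present, and is the socket_vmnet daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # Assumption: socket_vmnet is managed by Homebrew services; adjust if it
    # was installed from source (e.g. a launchd job under /Library/LaunchDaemons).
    sudo brew services restart socket_vmnet

    # Retry the start that failed above.
    out/minikube-darwin-arm64 start -p old-k8s-version-499000 --driver=qemu2 --kubernetes-version=v1.16.0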

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-513000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-513000 create -f testdata/busybox.yaml: exit status 1 (28.898ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-513000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (29.021084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (28.046833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
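kubectl's "error: no openapi getter" here is a client-side failure: kubectl cannot build a working client for the no-preload-513000 context (the cluster never came back after the stop), so the manifest is rejected before anything reaches a server. A quick hedged sanity check, using only standard kubectl subcommands, separates "bad context" from "bad manifest":

    # Does the context exist at all, and does its API server answer?
    kubectl config get-contexts no-preload-513000
    kubectl --context no-preload-513000 version
    # Only then attempt the create the test performs.
    kubectl --context no-preload-513000 create -f testdata/busybox.yaml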

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-513000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-513000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-513000 describe deploy/metrics-server -n kube-system: exit status 1 (26.108084ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-513000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-513000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (28.897542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
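Note the asymmetry in this block: the "addons enable metrics-server" command itself is not reported as failing, which suggests it only records the setting in the profile's config for the next start, while the follow-up kubectl describe needs a live cluster and fails. A hedged way to see both halves (the addons list subcommand is standard minikube; the reading of the stored state is an assumption):

    # Addon state is tracked per-profile even while the VM is stopped.
    out/minikube-darwin-arm64 -p no-preload-513000 addons list
    # The kubectl half only works once the cluster is reachable again.
    kubectl --context no-preload-513000 -n kube-system describe deploy/metrics-server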

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-513000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-513000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (5.168424083s)

                                                
                                                
-- stdout --
	* [no-preload-513000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-513000 in cluster no-preload-513000
	* Restarting existing qemu2 VM for "no-preload-513000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-513000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 12:03:26.679380    5506 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:26.679510    5506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:26.679513    5506 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:26.679515    5506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:26.679621    5506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:26.680545    5506 out.go:303] Setting JSON to false
	I0823 12:03:26.695914    5506 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3780,"bootTime":1692813626,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:26.695982    5506 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:26.700757    5506 out.go:177] * [no-preload-513000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:26.708687    5506 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:26.712553    5506 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:26.708721    5506 notify.go:220] Checking for updates...
	I0823 12:03:26.718696    5506 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:26.721765    5506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:26.724784    5506 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:26.727738    5506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:26.730952    5506 config.go:182] Loaded profile config "no-preload-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:26.731207    5506 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:26.735700    5506 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 12:03:26.742787    5506 start.go:298] selected driver: qemu2
	I0823 12:03:26.742791    5506 start.go:902] validating driver "qemu2" against &{Name:no-preload-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:no-preload-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:26.742853    5506 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:26.744720    5506 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:26.744749    5506 cni.go:84] Creating CNI manager for ""
	I0823 12:03:26.744756    5506 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:03:26.744762    5506 start_flags.go:319] config:
	{Name:no-preload-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:no-preload-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:26.748741    5506 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.755764    5506 out.go:177] * Starting control plane node no-preload-513000 in cluster no-preload-513000
	I0823 12:03:26.758733    5506 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:26.758837    5506 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/no-preload-513000/config.json ...
	I0823 12:03:26.758863    5506 cache.go:107] acquiring lock: {Name:mkd8a019f05b5c31f859ac085e05a97c2ef2a5de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.758875    5506 cache.go:107] acquiring lock: {Name:mk14ef687e087e3d018c03d03612817b680a4af4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.758931    5506 cache.go:107] acquiring lock: {Name:mkbcfc6d22278ee41b4cce4841e3a694da498bc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.758968    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0 exists
	I0823 12:03:26.758974    5506 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0" took 111.125µs
	I0823 12:03:26.758973    5506 cache.go:107] acquiring lock: {Name:mk9549ee4e302f062f08cb322beff50a1bfaddda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.758982    5506 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0 succeeded
	I0823 12:03:26.758983    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0823 12:03:26.758989    5506 cache.go:107] acquiring lock: {Name:mk11a49035bb9e2e5676ec785ce9875eff286241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.759000    5506 cache.go:107] acquiring lock: {Name:mkd029ec316e007a0d89190c7741da41ca847750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.758883    5506 cache.go:107] acquiring lock: {Name:mkbe5fdb0accb10720d891c59f16f2516569ec3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.759033    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0823 12:03:26.759038    5506 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 49.792µs
	I0823 12:03:26.759043    5506 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0823 12:03:26.758991    5506 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 133.125µs
	I0823 12:03:26.759049    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0 exists
	I0823 12:03:26.759057    5506 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0" took 57.958µs
	I0823 12:03:26.759061    5506 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0 succeeded
	I0823 12:03:26.759051    5506 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0823 12:03:26.759013    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0823 12:03:26.759077    5506 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 105.709µs
	I0823 12:03:26.759082    5506 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0823 12:03:26.759089    5506 cache.go:107] acquiring lock: {Name:mk4fcbb268cd9db9e3d5cd55562538ad1bf3a064 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:26.759109    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0 exists
	I0823 12:03:26.759114    5506 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0" took 244.791µs
	I0823 12:03:26.759119    5506 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0 succeeded
	I0823 12:03:26.759135    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0823 12:03:26.759141    5506 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 229.917µs
	I0823 12:03:26.759145    5506 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0823 12:03:26.759146    5506 cache.go:115] /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0 exists
	I0823 12:03:26.759150    5506 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.0" -> "/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0" took 69.625µs
	I0823 12:03:26.759156    5506 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.0 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0 succeeded
	I0823 12:03:26.759162    5506 cache.go:87] Successfully saved all images to host disk.
	I0823 12:03:26.759177    5506 start.go:365] acquiring machines lock for no-preload-513000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:26.759212    5506 start.go:369] acquired machines lock for "no-preload-513000" in 29.625µs
	I0823 12:03:26.759223    5506 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:26.759228    5506 fix.go:54] fixHost starting: 
	I0823 12:03:26.759358    5506 fix.go:102] recreateIfNeeded on no-preload-513000: state=Stopped err=<nil>
	W0823 12:03:26.759367    5506 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:26.766702    5506 out.go:177] * Restarting existing qemu2 VM for "no-preload-513000" ...
	I0823 12:03:26.770766    5506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:f9:b7:e2:88:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:26.772672    5506 main.go:141] libmachine: STDOUT: 
	I0823 12:03:26.772689    5506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:26.772724    5506 fix.go:56] fixHost completed within 13.48875ms
	I0823 12:03:26.772729    5506 start.go:83] releasing machines lock for "no-preload-513000", held for 13.50675ms
	W0823 12:03:26.772736    5506 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:26.772763    5506 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:26.772771    5506 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:31.774884    5506 start.go:365] acquiring machines lock for no-preload-513000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:31.774970    5506 start.go:369] acquired machines lock for "no-preload-513000" in 60.208µs
	I0823 12:03:31.774996    5506 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:31.775001    5506 fix.go:54] fixHost starting: 
	I0823 12:03:31.775123    5506 fix.go:102] recreateIfNeeded on no-preload-513000: state=Stopped err=<nil>
	W0823 12:03:31.775128    5506 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:31.782752    5506 out.go:177] * Restarting existing qemu2 VM for "no-preload-513000" ...
	I0823 12:03:31.790752    5506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:f9:b7:e2:88:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/no-preload-513000/disk.qcow2
	I0823 12:03:31.792640    5506 main.go:141] libmachine: STDOUT: 
	I0823 12:03:31.792654    5506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:31.792674    5506 fix.go:56] fixHost completed within 17.666958ms
	I0823 12:03:31.792679    5506 start.go:83] releasing machines lock for "no-preload-513000", held for 17.697667ms
	W0823 12:03:31.792729    5506 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-513000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-513000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:31.800671    5506 out.go:177] 
	W0823 12:03:31.803692    5506 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:31.803700    5506 out.go:239] * 
	* 
	W0823 12:03:31.804336    5506 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:31.814726    5506 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-513000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (30.400084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.20s)
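Unlike the old-k8s-version run, this --preload=false start skips the preloaded tarball and instead verifies each cached image tarball individually (the cache.go lines above); all of them already exist, so caching is not the failure here, and the start dies on the same socket_vmnet refusal. To inspect that per-image cache directly, a small sketch using the exact paths from the log:

    # Per-image cache verified by the no-preload run (paths from the log above):
    ls /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/images/arm64/registry.k8s.io/
    # On a default install (assumption: MINIKUBE_HOME unset) the same cache lives under:
    ls ~/.minikube/cache/images/arm64/registry.k8s.io/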

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-499000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (31.958375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-499000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-499000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-499000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.321ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-499000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-499000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (28.890875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-499000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-499000 "sudo crictl images -o json": exit status 89 (38.429833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-499000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-499000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-499000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (28.70025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
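This check fails twice over: minikube ssh refuses to run because the control plane is down (exit status 89), and the test then feeds that plain-text advisory into its JSON decoder, which chokes on the leading '*'. Gating the crictl call on host state avoids the misleading second error; a minimal sketch using only commands already shown in this report:

    # Run crictl only when the host is actually Running.
    if [ "$(out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000)" = "Running" ]; then
      out/minikube-darwin-arm64 ssh -p old-k8s-version-499000 "sudo crictl images -o json"
    fi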

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-499000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-499000 --alsologtostderr -v=1: exit status 89 (40.812042ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-499000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0823 12:03:31.536655    5525 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:31.537039    5525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:31.537043    5525 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:31.537045    5525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:31.537169    5525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:31.537407    5525 out.go:303] Setting JSON to false
	I0823 12:03:31.537416    5525 mustload.go:65] Loading cluster: old-k8s-version-499000
	I0823 12:03:31.537594    5525 config.go:182] Loaded profile config "old-k8s-version-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0823 12:03:31.541528    5525 out.go:177] * The control plane node must be running for this command
	I0823 12:03:31.545730    5525 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-499000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-499000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (28.356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (28.629583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-513000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (29.975833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
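The wait here fails before any polling starts: building the client config requires the "no-preload-513000" kubeconfig context, which was never written because the earlier starts failed. For contrast, a sketch of what a post-stop-start wait reduces to once a context does exist; this is an illustrative kubectl poll loop, not the client-go path start_stop_delete_test.go actually uses, and the namespace and label selector are placeholders:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForRunningPod polls the pod phase via kubectl until some matching
    // pod reports Running or the deadline passes.
    func waitForRunningPod(ctx, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
    			"get", "pods", "-l", selector,
    			"-o", "jsonpath={.items[*].status.phase}").Output()
    		if err == nil && strings.Contains(string(out), "Running") {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("no Running pod for %q in %s/%s after %v", selector, ctx, ns, timeout)
    }

    func main() {
    	err := waitForRunningPod("no-preload-513000", "kubernetes-dashboard",
    		"k8s-app=kubernetes-dashboard", 30*time.Second)
    	fmt.Println(err)
    }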

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-513000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-513000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-513000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.254959ms)

** stderr ** 
	error: context "no-preload-513000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-513000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (30.824667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
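Both kubectl invocations above fail for the same upstream reason: the context is absent from the kubeconfig, so "addon did not load correct image" is reported with empty deployment info rather than a real image mismatch. A cheap pre-flight check that separates "context missing" from "deployment missing" (a sketch, assuming kubectl resolves the same KUBECONFIG the test exports):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // contextExists lists context names from the active kubeconfig and
    // checks for an exact match.
    func contextExists(name string) (bool, error) {
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		return false, err
    	}
    	for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if c == name {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := contextExists("no-preload-513000")
    	fmt.Println(ok, err) // false, <nil> in this run
    }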

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-513000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-513000 "sudo crictl images -o json": exit status 89 (46.757291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-513000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-513000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-513000"
start_stop_delete_test.go:304: v1.28.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0",
- 	"registry.k8s.io/kube-proxy:v1.28.0",
- 	"registry.k8s.io/kube-scheduler:v1.28.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (30.721792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
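The decode error is mechanical rather than a data problem: minikube ssh never reached a VM, so the bytes handed to the JSON decoder are the *-prefixed advice text, and decoding fails on the very first byte. A sketch of the decode step follows; the struct shape ({"images":[{"repoTags":[...]}]}) is an assumption about crictl's output format, not something taken from the harness:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // criImages mirrors the assumed shape of `crictl images -o json`: a
    // top-level "images" array whose entries carry repoTags.
    type criImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // tags extracts all repo tags from raw crictl JSON output.
    func tags(raw []byte) ([]string, error) {
    	var parsed criImages
    	if err := json.Unmarshal(raw, &parsed); err != nil {
    		// This is the branch the test hit: raw was "* The control plane
    		// node must be running ...", not JSON at all.
    		return nil, fmt.Errorf("decode images json: %w", err)
    	}
    	var out []string
    	for _, img := range parsed.Images {
    		out = append(out, img.RepoTags...)
    	}
    	return out, nil
    }

    func main() {
    	_, err := tags([]byte("* The control plane node must be running for this command"))
    	fmt.Println(err) // invalid character '*' looking for beginning of value
    }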

TestStartStop/group/embed-certs/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-598000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-598000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (9.819835125s)

-- stdout --
	* [embed-certs-598000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-598000 in cluster embed-certs-598000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-598000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:32.031489    5559 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:32.031606    5559 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:32.031608    5559 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:32.031611    5559 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:32.031732    5559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:32.032807    5559 out.go:303] Setting JSON to false
	I0823 12:03:32.050214    5559 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3786,"bootTime":1692813626,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:32.050285    5559 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:32.063842    5559 out.go:177] * [embed-certs-598000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:32.074798    5559 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:32.070841    5559 notify.go:220] Checking for updates...
	I0823 12:03:32.081747    5559 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:32.084765    5559 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:32.087808    5559 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:32.090834    5559 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:32.093799    5559 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:32.097184    5559 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:32.097248    5559 config.go:182] Loaded profile config "no-preload-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:32.097290    5559 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:32.102726    5559 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:32.108827    5559 start.go:298] selected driver: qemu2
	I0823 12:03:32.108834    5559 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:32.108840    5559 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:32.110760    5559 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:03:32.114728    5559 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:32.117895    5559 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:32.117920    5559 cni.go:84] Creating CNI manager for ""
	I0823 12:03:32.117927    5559 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:03:32.117931    5559 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:03:32.117937    5559 start_flags.go:319] config:
	{Name:embed-certs-598000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:embed-certs-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:32.122335    5559 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:32.123803    5559 out.go:177] * Starting control plane node embed-certs-598000 in cluster embed-certs-598000
	I0823 12:03:32.131841    5559 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:32.131868    5559 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:32.131879    5559 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:32.131978    5559 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:32.131983    5559 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:32.132070    5559 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/embed-certs-598000/config.json ...
	I0823 12:03:32.132081    5559 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/embed-certs-598000/config.json: {Name:mk0bcb80357880a8049ab210711b477ff1ff7083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:32.132258    5559 start.go:365] acquiring machines lock for embed-certs-598000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:32.132281    5559 start.go:369] acquired machines lock for "embed-certs-598000" in 18.125µs
	I0823 12:03:32.132290    5559 start.go:93] Provisioning new machine with config: &{Name:embed-certs-598000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:embed-certs-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:32.132317    5559 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:32.135826    5559 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:32.150099    5559 start.go:159] libmachine.API.Create for "embed-certs-598000" (driver="qemu2")
	I0823 12:03:32.150130    5559 client.go:168] LocalClient.Create starting
	I0823 12:03:32.150190    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:32.150218    5559 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:32.150228    5559 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:32.150274    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:32.150292    5559 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:32.150303    5559 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:32.150644    5559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:32.315999    5559 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:32.362954    5559 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:32.362965    5559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:32.363122    5559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:32.375591    5559 main.go:141] libmachine: STDOUT: 
	I0823 12:03:32.375613    5559 main.go:141] libmachine: STDERR: 
	I0823 12:03:32.375674    5559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2 +20000M
	I0823 12:03:32.383424    5559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:32.383443    5559 main.go:141] libmachine: STDERR: 
	I0823 12:03:32.383466    5559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:32.383473    5559 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:32.383524    5559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:53:0d:d1:7e:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:32.385331    5559 main.go:141] libmachine: STDOUT: 
	I0823 12:03:32.385347    5559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:32.385367    5559 client.go:171] LocalClient.Create took 235.159542ms
	I0823 12:03:34.388132    5559 start.go:128] duration metric: createHost completed in 2.255148958s
	I0823 12:03:34.388253    5559 start.go:83] releasing machines lock for "embed-certs-598000", held for 2.255330041s
	W0823 12:03:34.388370    5559 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:34.407562    5559 out.go:177] * Deleting "embed-certs-598000" in qemu2 ...
	W0823 12:03:34.422748    5559 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:34.422778    5559 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:39.426131    5559 start.go:365] acquiring machines lock for embed-certs-598000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:39.426814    5559 start.go:369] acquired machines lock for "embed-certs-598000" in 486.167µs
	I0823 12:03:39.426987    5559 start.go:93] Provisioning new machine with config: &{Name:embed-certs-598000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:embed-certs-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:39.427340    5559 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:39.436999    5559 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:39.483926    5559 start.go:159] libmachine.API.Create for "embed-certs-598000" (driver="qemu2")
	I0823 12:03:39.483966    5559 client.go:168] LocalClient.Create starting
	I0823 12:03:39.484117    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:39.484174    5559 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:39.484194    5559 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:39.484257    5559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:39.484293    5559 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:39.484304    5559 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:39.484805    5559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:39.617361    5559 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:39.769636    5559 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:39.769645    5559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:39.769796    5559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:39.778864    5559 main.go:141] libmachine: STDOUT: 
	I0823 12:03:39.778880    5559 main.go:141] libmachine: STDERR: 
	I0823 12:03:39.778953    5559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2 +20000M
	I0823 12:03:39.786186    5559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:39.786204    5559 main.go:141] libmachine: STDERR: 
	I0823 12:03:39.786217    5559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:39.786226    5559 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:39.786264    5559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:71:53:49:d3:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:39.787813    5559 main.go:141] libmachine: STDOUT: 
	I0823 12:03:39.787827    5559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:39.787840    5559 client.go:171] LocalClient.Create took 303.814875ms
	I0823 12:03:41.790324    5559 start.go:128] duration metric: createHost completed in 2.362562334s
	I0823 12:03:41.790387    5559 start.go:83] releasing machines lock for "embed-certs-598000", held for 2.363154375s
	W0823 12:03:41.790890    5559 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-598000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-598000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:41.796844    5559 out.go:177] 
	W0823 12:03:41.801436    5559 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:41.801458    5559 out.go:239] * 
	* 
	W0823 12:03:41.804298    5559 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:41.815327    5559 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-598000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (62.3375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.88s)
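Every FirstStart in this run dies at the same step: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU is never launched, and the Kubernetes-level failures are all downstream of that. A quick standalone probe of the socket (a sketch; the path is copied from the log) reproduces the refusal without involving minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // main dials the socket_vmnet control socket the way socket_vmnet_client
    // would. "connection refused" here matches the failure in the log and
    // points at the socket_vmnet daemon, not at minikube or QEMU.
    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }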

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-513000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-513000 --alsologtostderr -v=1: exit status 89 (46.779791ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-513000"

-- /stdout --
** stderr ** 
	I0823 12:03:32.048470    5561 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:32.048609    5561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:32.048612    5561 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:32.048615    5561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:32.048722    5561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:32.048921    5561 out.go:303] Setting JSON to false
	I0823 12:03:32.048931    5561 mustload.go:65] Loading cluster: no-preload-513000
	I0823 12:03:32.049085    5561 config.go:182] Loaded profile config "no-preload-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:32.052884    5561 out.go:177] * The control plane node must be running for this command
	I0823 12:03:32.060730    5561 out.go:177]   To start a cluster, run: "minikube start -p no-preload-513000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-513000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (35.293291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (32.9515ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-513000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-411000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-411000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (11.388295208s)

-- stdout --
	* [default-k8s-diff-port-411000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-411000 in cluster default-k8s-diff-port-411000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:32.804421    5605 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:32.804526    5605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:32.804528    5605 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:32.804531    5605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:32.804640    5605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:32.805659    5605 out.go:303] Setting JSON to false
	I0823 12:03:32.820877    5605 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3786,"bootTime":1692813626,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:32.820951    5605 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:32.825684    5605 out.go:177] * [default-k8s-diff-port-411000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:32.832695    5605 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:32.832744    5605 notify.go:220] Checking for updates...
	I0823 12:03:32.836690    5605 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:32.839665    5605 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:32.842617    5605 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:32.849673    5605 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:32.853618    5605 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:32.856932    5605 config.go:182] Loaded profile config "embed-certs-598000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:32.856997    5605 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:32.857034    5605 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:32.861699    5605 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:32.868637    5605 start.go:298] selected driver: qemu2
	I0823 12:03:32.868642    5605 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:32.868657    5605 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:32.870609    5605 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 12:03:32.873666    5605 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:32.876801    5605 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:32.876830    5605 cni.go:84] Creating CNI manager for ""
	I0823 12:03:32.876838    5605 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:03:32.876842    5605 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:03:32.876848    5605 start_flags.go:319] config:
	{Name:default-k8s-diff-port-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-411000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:32.881158    5605 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:32.888527    5605 out.go:177] * Starting control plane node default-k8s-diff-port-411000 in cluster default-k8s-diff-port-411000
	I0823 12:03:32.892683    5605 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:32.892706    5605 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:32.892786    5605 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:32.892852    5605 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:32.892858    5605 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:32.892925    5605 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/default-k8s-diff-port-411000/config.json ...
	I0823 12:03:32.892937    5605 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/default-k8s-diff-port-411000/config.json: {Name:mk2bf4c917cae76ae9cddc2b25b6b02e7049f081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:32.893157    5605 start.go:365] acquiring machines lock for default-k8s-diff-port-411000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:34.388473    5605 start.go:369] acquired machines lock for "default-k8s-diff-port-411000" in 1.49484475s
	I0823 12:03:34.388595    5605 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-411000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:34.388911    5605 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:34.399587    5605 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:34.446082    5605 start.go:159] libmachine.API.Create for "default-k8s-diff-port-411000" (driver="qemu2")
	I0823 12:03:34.446132    5605 client.go:168] LocalClient.Create starting
	I0823 12:03:34.446261    5605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:34.446309    5605 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:34.446335    5605 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:34.446421    5605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:34.446457    5605 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:34.446469    5605 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:34.447178    5605 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:34.580568    5605 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:34.743575    5605 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:34.743581    5605 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:34.743722    5605 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:34.752407    5605 main.go:141] libmachine: STDOUT: 
	I0823 12:03:34.752423    5605 main.go:141] libmachine: STDERR: 
	I0823 12:03:34.752483    5605 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2 +20000M
	I0823 12:03:34.759632    5605 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:34.759645    5605 main.go:141] libmachine: STDERR: 
	I0823 12:03:34.759664    5605 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:34.759672    5605 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:34.759703    5605 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:1c:e3:5b:4e:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:34.761200    5605 main.go:141] libmachine: STDOUT: 
	I0823 12:03:34.761225    5605 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:34.761245    5605 client.go:171] LocalClient.Create took 315.01975ms
	I0823 12:03:36.763928    5605 start.go:128] duration metric: createHost completed in 2.374386875s
	I0823 12:03:36.764063    5605 start.go:83] releasing machines lock for "default-k8s-diff-port-411000", held for 2.374913584s
	W0823 12:03:36.764129    5605 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:36.777390    5605 out.go:177] * Deleting "default-k8s-diff-port-411000" in qemu2 ...
	W0823 12:03:36.799694    5605 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:36.799724    5605 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:41.801784    5605 start.go:365] acquiring machines lock for default-k8s-diff-port-411000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:41.802235    5605 start.go:369] acquired machines lock for "default-k8s-diff-port-411000" in 376.292µs
	I0823 12:03:41.802355    5605 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-411000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:41.802616    5605 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:41.815321    5605 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:41.860888    5605 start.go:159] libmachine.API.Create for "default-k8s-diff-port-411000" (driver="qemu2")
	I0823 12:03:41.860925    5605 client.go:168] LocalClient.Create starting
	I0823 12:03:41.861043    5605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:41.861089    5605 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:41.861107    5605 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:41.861179    5605 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:41.861211    5605 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:41.861225    5605 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:41.861809    5605 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:41.993567    5605 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:42.102914    5605 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:42.102926    5605 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:42.103086    5605 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:42.112012    5605 main.go:141] libmachine: STDOUT: 
	I0823 12:03:42.112030    5605 main.go:141] libmachine: STDERR: 
	I0823 12:03:42.112104    5605 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2 +20000M
	I0823 12:03:42.120047    5605 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:42.120069    5605 main.go:141] libmachine: STDERR: 
	I0823 12:03:42.120085    5605 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:42.120092    5605 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:42.120140    5605 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:bc:aa:ba:e0:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:42.121928    5605 main.go:141] libmachine: STDOUT: 
	I0823 12:03:42.121943    5605 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:42.121952    5605 client.go:171] LocalClient.Create took 260.983917ms
	I0823 12:03:44.123252    5605 start.go:128] duration metric: createHost completed in 2.320265208s
	I0823 12:03:44.123320    5605 start.go:83] releasing machines lock for "default-k8s-diff-port-411000", held for 2.320741083s
	W0823 12:03:44.123605    5605 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:44.132240    5605 out.go:177] 
	W0823 12:03:44.143080    5605 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:44.143110    5605 out.go:239] * 
	* 
	W0823 12:03:44.145500    5605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:44.156164    5605 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-411000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (64.040167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.45s)
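Note on the failure mode: the disk-image steps above succeed (qemu-img convert to qcow2, then resize by +20000M), and the start only dies when socket_vmnet_client tries to obtain a network file descriptor from the socket_vmnet daemon at /var/run/socket_vmnet, which refuses the connection. Every qemu2 start failure in this section repeats that same STDERR line. A minimal reachability sketch in Go, assuming only the socket path shown in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client connects to before launching QEMU.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host the dial fails with "connect: connection refused".
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails this way, the suggested "minikube delete -p ..." cannot help; the socket_vmnet daemon on the CI host has to be brought back up first.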

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-598000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-598000 create -f testdata/busybox.yaml: exit status 1 (31.979ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-598000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (33.918416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (33.027166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
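The "error: no openapi getter" above is kubectl giving up before the busybox manifest is applied: the embed-certs-598000 context points at a VM that never started, so there is no API server to fetch the OpenAPI schema from. A sketch of reproducing the harness step by hand, using the exact command from the log (assumes kubectl and the test's testdata directory are available):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as start_stop_delete_test.go:196 above.
		out, err := exec.Command("kubectl", "--context", "embed-certs-598000",
			"create", "-f", "testdata/busybox.yaml").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// With the host stopped this exits 1, printing "error: no openapi getter".
			fmt.Println("create failed:", err)
		}
	}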

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-598000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-598000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-598000 describe deploy/metrics-server -n kube-system: exit status 1 (28.005375ms)

** stderr ** 
	error: context "embed-certs-598000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-598000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (30.026166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
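The image assertion at start_stop_delete_test.go:221 is a plain substring check: the expected value is the custom registry from --registries joined onto the custom image from --images, and the "Addon deployment info" it searches is empty because the describe itself failed. A small sketch of how that expected string is put together (my reconstruction from the flags in the log, not minikube's code):

	package main

	import "fmt"

	func main() {
		// Flag values from the `addons enable` invocation above.
		registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4
		// The leading space matches the substring quoted in the failure message.
		fmt.Printf("expected to contain: %q\n", " "+registry+"/"+image)
	}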

TestStartStop/group/embed-certs/serial/SecondStart (7.02s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-598000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.0
E0823 12:03:43.258796    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-598000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (6.94855225s)

-- stdout --
	* [embed-certs-598000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-598000 in cluster embed-certs-598000
	* Restarting existing qemu2 VM for "embed-certs-598000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-598000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:42.289013    5642 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:42.289122    5642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:42.289125    5642 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:42.289127    5642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:42.289241    5642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:42.290178    5642 out.go:303] Setting JSON to false
	I0823 12:03:42.305616    5642 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3796,"bootTime":1692813626,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:42.305689    5642 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:42.309164    5642 out.go:177] * [embed-certs-598000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:42.315042    5642 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:42.319151    5642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:42.315100    5642 notify.go:220] Checking for updates...
	I0823 12:03:42.322139    5642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:42.325109    5642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:42.328179    5642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:42.331155    5642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:42.332893    5642 config.go:182] Loaded profile config "embed-certs-598000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:42.333128    5642 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:42.337107    5642 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 12:03:42.343988    5642 start.go:298] selected driver: qemu2
	I0823 12:03:42.343992    5642 start.go:902] validating driver "qemu2" against &{Name:embed-certs-598000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:embed-certs-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:42.344070    5642 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:42.346022    5642 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:42.346047    5642 cni.go:84] Creating CNI manager for ""
	I0823 12:03:42.346053    5642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:03:42.346060    5642 start_flags.go:319] config:
	{Name:embed-certs-598000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:embed-certs-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:42.349920    5642 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:42.357167    5642 out.go:177] * Starting control plane node embed-certs-598000 in cluster embed-certs-598000
	I0823 12:03:42.361104    5642 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:42.361132    5642 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:42.361145    5642 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:42.361205    5642 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:42.361210    5642 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:42.361276    5642 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/embed-certs-598000/config.json ...
	I0823 12:03:42.361630    5642 start.go:365] acquiring machines lock for embed-certs-598000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:44.123495    5642 start.go:369] acquired machines lock for "embed-certs-598000" in 1.761583542s
	I0823 12:03:44.123675    5642 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:44.123708    5642 fix.go:54] fixHost starting: 
	I0823 12:03:44.124430    5642 fix.go:102] recreateIfNeeded on embed-certs-598000: state=Stopped err=<nil>
	W0823 12:03:44.124480    5642 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:44.139219    5642 out.go:177] * Restarting existing qemu2 VM for "embed-certs-598000" ...
	I0823 12:03:44.146401    5642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:71:53:49:d3:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:44.156274    5642 main.go:141] libmachine: STDOUT: 
	I0823 12:03:44.156390    5642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:44.156548    5642 fix.go:56] fixHost completed within 32.834208ms
	I0823 12:03:44.156572    5642 start.go:83] releasing machines lock for "embed-certs-598000", held for 33.019792ms
	W0823 12:03:44.156614    5642 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:44.156804    5642 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:44.156827    5642 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:49.157926    5642 start.go:365] acquiring machines lock for embed-certs-598000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:49.158185    5642 start.go:369] acquired machines lock for "embed-certs-598000" in 191.5µs
	I0823 12:03:49.158271    5642 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:49.158285    5642 fix.go:54] fixHost starting: 
	I0823 12:03:49.158785    5642 fix.go:102] recreateIfNeeded on embed-certs-598000: state=Stopped err=<nil>
	W0823 12:03:49.158801    5642 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:49.166291    5642 out.go:177] * Restarting existing qemu2 VM for "embed-certs-598000" ...
	I0823 12:03:49.170517    5642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:71:53:49:d3:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/embed-certs-598000/disk.qcow2
	I0823 12:03:49.175652    5642 main.go:141] libmachine: STDOUT: 
	I0823 12:03:49.175689    5642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:49.175745    5642 fix.go:56] fixHost completed within 17.459708ms
	I0823 12:03:49.175757    5642 start.go:83] releasing machines lock for "embed-certs-598000", held for 17.553666ms
	W0823 12:03:49.175904    5642 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-598000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-598000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:49.182383    5642 out.go:177] 
	W0823 12:03:49.186677    5642 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:49.186713    5642 out.go:239] * 
	* 
	W0823 12:03:49.189653    5642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:49.198420    5642 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-598000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (63.561083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.02s)
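Unlike the first start, the second start skips creation entirely: fix.go finds the existing profile in state=Stopped, tries to restart the VM, hits the same refused socket, waits five seconds, retries once, and only then exits with GUEST_PROVISION. The shape of that retry, sketched from the log lines above (the function name is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// fixHostSketch stands in for the restart attempt; it can only succeed
	// once something is listening on /var/run/socket_vmnet again.
	func fixHostSketch() error {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			return fmt.Errorf("driver start: %w", err)
		}
		return conn.Close()
	}

	func main() {
		if err := fixHostSketch(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := fixHostSketch(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}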

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-411000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-411000 create -f testdata/busybox.yaml: exit status 1 (29.014ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-411000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (29.109833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (28.164875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-411000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-411000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-411000 describe deploy/metrics-server -n kube-system: exit status 1 (25.909708ms)

** stderr ** 
	error: context "default-k8s-diff-port-411000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-411000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (28.694084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-411000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-411000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (5.166367916s)

-- stdout --
	* [default-k8s-diff-port-411000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-411000 in cluster default-k8s-diff-port-411000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-411000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-411000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:44.614750    5667 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:44.614856    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:44.614859    5667 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:44.614861    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:44.614987    5667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:44.615934    5667 out.go:303] Setting JSON to false
	I0823 12:03:44.630977    5667 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3798,"bootTime":1692813626,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:44.631042    5667 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:44.635364    5667 out.go:177] * [default-k8s-diff-port-411000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:44.641382    5667 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:44.645415    5667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:44.641518    5667 notify.go:220] Checking for updates...
	I0823 12:03:44.651300    5667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:44.654373    5667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:44.657419    5667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:44.658711    5667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:44.666728    5667 config.go:182] Loaded profile config "default-k8s-diff-port-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:44.666968    5667 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:44.671395    5667 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 12:03:44.678367    5667 start.go:298] selected driver: qemu2
	I0823 12:03:44.678378    5667 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-411000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:44.678462    5667 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:44.680419    5667 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0823 12:03:44.680447    5667 cni.go:84] Creating CNI manager for ""
	I0823 12:03:44.680454    5667 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:03:44.680461    5667 start_flags.go:319] config:
	{Name:default-k8s-diff-port-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-411000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:44.684577    5667 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:44.687525    5667 out.go:177] * Starting control plane node default-k8s-diff-port-411000 in cluster default-k8s-diff-port-411000
	I0823 12:03:44.695359    5667 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:44.695386    5667 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:44.695405    5667 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:44.695457    5667 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:44.695463    5667 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:44.695530    5667 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/default-k8s-diff-port-411000/config.json ...
	I0823 12:03:44.695906    5667 start.go:365] acquiring machines lock for default-k8s-diff-port-411000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:44.695932    5667 start.go:369] acquired machines lock for "default-k8s-diff-port-411000" in 20.417µs
	I0823 12:03:44.695942    5667 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:44.695946    5667 fix.go:54] fixHost starting: 
	I0823 12:03:44.696059    5667 fix.go:102] recreateIfNeeded on default-k8s-diff-port-411000: state=Stopped err=<nil>
	W0823 12:03:44.696068    5667 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:44.699394    5667 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-411000" ...
	I0823 12:03:44.707460    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:bc:aa:ba:e0:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:44.709520    5667 main.go:141] libmachine: STDOUT: 
	I0823 12:03:44.709538    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:44.709570    5667 fix.go:56] fixHost completed within 13.618667ms
	I0823 12:03:44.709575    5667 start.go:83] releasing machines lock for "default-k8s-diff-port-411000", held for 13.637541ms
	W0823 12:03:44.709582    5667 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:44.709627    5667 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:44.709632    5667 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:49.712164    5667 start.go:365] acquiring machines lock for default-k8s-diff-port-411000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:49.712255    5667 start.go:369] acquired machines lock for "default-k8s-diff-port-411000" in 69.916µs
	I0823 12:03:49.712275    5667 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:03:49.712279    5667 fix.go:54] fixHost starting: 
	I0823 12:03:49.712408    5667 fix.go:102] recreateIfNeeded on default-k8s-diff-port-411000: state=Stopped err=<nil>
	W0823 12:03:49.712412    5667 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:03:49.716433    5667 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-411000" ...
	I0823 12:03:49.724470    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:bc:aa:ba:e0:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/default-k8s-diff-port-411000/disk.qcow2
	I0823 12:03:49.726354    5667 main.go:141] libmachine: STDOUT: 
	I0823 12:03:49.726366    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:49.726387    5667 fix.go:56] fixHost completed within 14.106458ms
	I0823 12:03:49.726399    5667 start.go:83] releasing machines lock for "default-k8s-diff-port-411000", held for 14.138ms
	W0823 12:03:49.726457    5667 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-411000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-411000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:49.733342    5667 out.go:177] 
	W0823 12:03:49.737332    5667 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:49.737338    5667 out.go:239] * 
	* 
	W0823 12:03:49.737813    5667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:49.747455    5667 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-411000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (30.8815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-598000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (32.792042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-598000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-598000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-598000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.317125ms)

** stderr ** 
	error: context "embed-certs-598000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-598000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (27.499083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-598000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-598000 "sudo crictl images -o json": exit status 89 (37.679709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-598000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-598000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-598000"
start_stop_delete_test.go:304: v1.28.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0",
- 	"registry.k8s.io/kube-proxy:v1.28.0",
- 	"registry.k8s.io/kube-scheduler:v1.28.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (30.03975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
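The decode error is a direct consequence of the exit-89 failure: stdout carried the advisory text beginning with '*' instead of JSON. A self-contained reproduction of exactly the reported message (the struct is a simplified sketch of the shape crictl's images -o json emits, not the test's actual type):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// What the ssh command actually printed: a hint for humans, not JSON.
	out := []byte(`* The control plane node must be running for this command`)
	var listing struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	err := json.Unmarshal(out, &listing)
	fmt.Println(err) // invalid character '*' looking for beginning of value
}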

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-598000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-598000 --alsologtostderr -v=1: exit status 89 (38.591167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-598000"

-- /stdout --
** stderr ** 
	I0823 12:03:49.459697    5688 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:49.459826    5688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:49.459828    5688 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:49.459830    5688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:49.459949    5688 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:49.460140    5688 out.go:303] Setting JSON to false
	I0823 12:03:49.460150    5688 mustload.go:65] Loading cluster: embed-certs-598000
	I0823 12:03:49.460333    5688 config.go:182] Loaded profile config "embed-certs-598000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:49.463358    5688 out.go:177] * The control plane node must be running for this command
	I0823 12:03:49.467479    5688 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-598000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-598000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (27.612916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (28.643792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
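pause exits 89 whenever the control plane is not running, as the stderr trace shows; the post-mortem then confirms the host is Stopped. A guard that mirrors the post-mortem's own status probe, sketched with the binary and profile names used above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// status exits non-zero (7 in this report) when the host is Stopped,
	// so the error is deliberately ignored and only stdout is inspected.
	out, _ := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "embed-certs-598000").Output()
	if strings.TrimSpace(string(out)) != "Running" {
		fmt.Println("host not running; skipping pause")
		return
	}
	if err := exec.Command("out/minikube-darwin-arm64",
		"pause", "-p", "embed-certs-598000").Run(); err != nil {
		fmt.Println("pause failed:", err)
	}
}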

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-411000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (30.169458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-411000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-411000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-411000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.595542ms)

** stderr ** 
	error: context "default-k8s-diff-port-411000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-411000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (30.643542ms)

                                                
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-411000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-411000 "sudo crictl images -o json": exit status 89 (47.109666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-411000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-411000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-411000"
start_stop_delete_test.go:304: v1.28.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0",
- 	"registry.k8s.io/kube-proxy:v1.28.0",
- 	"registry.k8s.io/kube-scheduler:v1.28.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (30.335583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
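The (-want +got) layout above matches the diff style of github.com/google/go-cmp, which is presumably what the assertion uses; the sketch below shows how an empty image list (nothing could be listed, since the ssh command failed) yields exactly this all-minus diff:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.28.0",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: the crictl listing never ran
	// Entries present only in want are printed with a leading "-", entries
	// present only in got with a leading "+" -- the "(-want +got)" convention.
	fmt.Println(cmp.Diff(want, got))
}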

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-087000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-087000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (9.710949208s)

-- stdout --
	* [newest-cni-087000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-087000 in cluster newest-cni-087000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-087000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:49.964022    5723 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:49.964146    5723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:49.964149    5723 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:49.964153    5723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:49.964275    5723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:49.965413    5723 out.go:303] Setting JSON to false
	I0823 12:03:49.982648    5723 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3803,"bootTime":1692813626,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:49.982713    5723 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:49.991455    5723 out.go:177] * [newest-cni-087000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:50.002444    5723 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:49.998502    5723 notify.go:220] Checking for updates...
	I0823 12:03:50.009563    5723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:50.012527    5723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:03:50.015433    5723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:03:50.018452    5723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:03:50.021498    5723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:03:50.024698    5723 config.go:182] Loaded profile config "default-k8s-diff-port-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:50.024768    5723 config.go:182] Loaded profile config "multinode-831000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:50.024821    5723 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:03:50.030411    5723 out.go:177] * Using the qemu2 driver based on user configuration
	I0823 12:03:50.036388    5723 start.go:298] selected driver: qemu2
	I0823 12:03:50.036395    5723 start.go:902] validating driver "qemu2" against <nil>
	I0823 12:03:50.036408    5723 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:03:50.038283    5723 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0823 12:03:50.038307    5723 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0823 12:03:50.041427    5723 out.go:177] * Automatically selected the socket_vmnet network
	I0823 12:03:50.048514    5723 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0823 12:03:50.048538    5723 cni.go:84] Creating CNI manager for ""
	I0823 12:03:50.048544    5723 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:03:50.048547    5723 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0823 12:03:50.048552    5723 start_flags.go:319] config:
	{Name:newest-cni-087000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:newest-cni-087000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:03:50.052899    5723 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:03:50.060360    5723 out.go:177] * Starting control plane node newest-cni-087000 in cluster newest-cni-087000
	I0823 12:03:50.064490    5723 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:03:50.064513    5723 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:03:50.064523    5723 cache.go:57] Caching tarball of preloaded images
	I0823 12:03:50.064594    5723 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:03:50.064599    5723 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:03:50.064668    5723 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/newest-cni-087000/config.json ...
	I0823 12:03:50.064679    5723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/newest-cni-087000/config.json: {Name:mk208ca143716a8b4417f34b7c688df273357fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 12:03:50.064870    5723 start.go:365] acquiring machines lock for newest-cni-087000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:50.064893    5723 start.go:369] acquired machines lock for "newest-cni-087000" in 17.459µs
	I0823 12:03:50.064902    5723 start.go:93] Provisioning new machine with config: &{Name:newest-cni-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:newest-cni-087000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:50.064933    5723 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:50.069423    5723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:50.083906    5723 start.go:159] libmachine.API.Create for "newest-cni-087000" (driver="qemu2")
	I0823 12:03:50.083934    5723 client.go:168] LocalClient.Create starting
	I0823 12:03:50.083986    5723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:50.084010    5723 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:50.084018    5723 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:50.084066    5723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:50.084084    5723 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:50.084091    5723 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:50.084421    5723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:50.245156    5723 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:50.294630    5723 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:50.294638    5723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:50.294913    5723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:03:50.304297    5723 main.go:141] libmachine: STDOUT: 
	I0823 12:03:50.304327    5723 main.go:141] libmachine: STDERR: 
	I0823 12:03:50.304399    5723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2 +20000M
	I0823 12:03:50.312486    5723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:50.312503    5723 main.go:141] libmachine: STDERR: 
	I0823 12:03:50.312535    5723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:03:50.312546    5723 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:50.312596    5723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:a2:f6:92:48:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:03:50.314408    5723 main.go:141] libmachine: STDOUT: 
	I0823 12:03:50.314422    5723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:50.314445    5723 client.go:171] LocalClient.Create took 230.48625ms
	I0823 12:03:52.316826    5723 start.go:128] duration metric: createHost completed in 2.251702333s
	I0823 12:03:52.316911    5723 start.go:83] releasing machines lock for "newest-cni-087000", held for 2.251849792s
	W0823 12:03:52.317016    5723 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:52.328222    5723 out.go:177] * Deleting "newest-cni-087000" in qemu2 ...
	W0823 12:03:52.349560    5723 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:52.349586    5723 start.go:687] Will try again in 5 seconds ...
	I0823 12:03:57.352108    5723 start.go:365] acquiring machines lock for newest-cni-087000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:03:57.352677    5723 start.go:369] acquired machines lock for "newest-cni-087000" in 454.5µs
	I0823 12:03:57.352816    5723 start.go:93] Provisioning new machine with config: &{Name:newest-cni-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:newest-cni-087000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0823 12:03:57.353158    5723 start.go:125] createHost starting for "" (driver="qemu2")
	I0823 12:03:57.362757    5723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0823 12:03:57.411483    5723 start.go:159] libmachine.API.Create for "newest-cni-087000" (driver="qemu2")
	I0823 12:03:57.411527    5723 client.go:168] LocalClient.Create starting
	I0823 12:03:57.411648    5723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/ca.pem
	I0823 12:03:57.411697    5723 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:57.411719    5723 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:57.411792    5723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17086-1133/.minikube/certs/cert.pem
	I0823 12:03:57.411827    5723 main.go:141] libmachine: Decoding PEM data...
	I0823 12:03:57.411848    5723 main.go:141] libmachine: Parsing certificate...
	I0823 12:03:57.412413    5723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0823 12:03:57.540159    5723 main.go:141] libmachine: Creating SSH key...
	I0823 12:03:57.588534    5723 main.go:141] libmachine: Creating Disk image...
	I0823 12:03:57.588539    5723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0823 12:03:57.588685    5723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2.raw /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:03:57.597244    5723 main.go:141] libmachine: STDOUT: 
	I0823 12:03:57.597262    5723 main.go:141] libmachine: STDERR: 
	I0823 12:03:57.597329    5723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2 +20000M
	I0823 12:03:57.604529    5723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0823 12:03:57.604545    5723 main.go:141] libmachine: STDERR: 
	I0823 12:03:57.604558    5723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:03:57.604567    5723 main.go:141] libmachine: Starting QEMU VM...
	I0823 12:03:57.604622    5723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7e:c6:37:93:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:03:57.606153    5723 main.go:141] libmachine: STDOUT: 
	I0823 12:03:57.606168    5723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:03:57.606180    5723 client.go:171] LocalClient.Create took 194.641542ms
	I0823 12:03:59.608407    5723 start.go:128] duration metric: createHost completed in 2.255147916s
	I0823 12:03:59.608466    5723 start.go:83] releasing machines lock for "newest-cni-087000", held for 2.255689833s
	W0823 12:03:59.608912    5723 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-087000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-087000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:03:59.617526    5723 out.go:177] 
	W0823 12:03:59.621612    5723 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:03:59.621649    5723 out.go:239] * 
	* 
	W0823 12:03:59.624203    5723 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:03:59.634455    5723 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-087000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000: exit status 7 (66.118541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.78s)
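Both provisioning attempts die on the same refused connection to /var/run/socket_vmnet, the unix socket listed as SocketVMnetPath in the config dump above. A quick probe of that socket, assuming nothing beyond the path shown in the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// e.g. "connect: connection refused" when socket_vmnet is not running.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused the way it is throughout this report, the socket_vmnet daemon on the CI host is the component to restart; rerunning minikube alone will keep hitting the same error.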

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-411000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-411000 --alsologtostderr -v=1: exit status 89 (43.079542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-411000"

-- /stdout --
** stderr ** 
	I0823 12:03:49.980474    5727 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:49.980621    5727 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:49.980624    5727 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:49.980627    5727 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:49.980733    5727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:49.980941    5727 out.go:303] Setting JSON to false
	I0823 12:03:49.980951    5727 mustload.go:65] Loading cluster: default-k8s-diff-port-411000
	I0823 12:03:49.981146    5727 config.go:182] Loaded profile config "default-k8s-diff-port-411000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:03:49.985437    5727 out.go:177] * The control plane node must be running for this command
	I0823 12:03:49.991438    5727 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-411000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-411000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (34.4675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (29.732666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-411000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
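The FirstStart failure earlier in this report already prints the suggested recovery ("Running \"minikube delete -p ...\" may fix it"), and the same applies to this profile. A sketch automating that suggestion, reusing the flags from this group's original start invocation (a hedged helper, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-411000"
	steps := [][]string{
		{"delete", "-p", profile},
		{"start", "-p", profile, "--memory=2200", "--apiserver-port=8444", "--driver=qemu2"},
	}
	for _, args := range steps {
		// CombinedOutput keeps the same stdout+stderr view the report shows.
		out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
		fmt.Printf("minikube %s:\n%s", args[0], out)
		if err != nil {
			fmt.Println("stopped at", args[0], "with", err)
			return
		}
	}
}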

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-087000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-087000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0: exit status 80 (5.179902916s)

-- stdout --
	* [newest-cni-087000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-087000 in cluster newest-cni-087000
	* Restarting existing qemu2 VM for "newest-cni-087000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-087000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0823 12:03:59.964809    5773 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:03:59.964925    5773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:59.964927    5773 out.go:309] Setting ErrFile to fd 2...
	I0823 12:03:59.964930    5773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:03:59.965044    5773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:03:59.966024    5773 out.go:303] Setting JSON to false
	I0823 12:03:59.981047    5773 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3813,"bootTime":1692813626,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 12:03:59.981105    5773 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 12:03:59.985088    5773 out.go:177] * [newest-cni-087000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 12:03:59.992069    5773 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 12:03:59.996115    5773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 12:03:59.992139    5773 notify.go:220] Checking for updates...
	I0823 12:04:00.002064    5773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 12:04:00.005091    5773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 12:04:00.008103    5773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 12:04:00.011002    5773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 12:04:00.014776    5773 config.go:182] Loaded profile config "newest-cni-087000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:04:00.015230    5773 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 12:04:00.019124    5773 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 12:04:00.026119    5773 start.go:298] selected driver: qemu2
	I0823 12:04:00.026124    5773 start.go:902] validating driver "qemu2" against &{Name:newest-cni-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:newest-cni-087000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:04:00.026188    5773 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 12:04:00.028297    5773 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0823 12:04:00.028323    5773 cni.go:84] Creating CNI manager for ""
	I0823 12:04:00.028330    5773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 12:04:00.028337    5773 start_flags.go:319] config:
	{Name:newest-cni-087000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:newest-cni-087000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 12:04:00.032503    5773 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 12:04:00.040057    5773 out.go:177] * Starting control plane node newest-cni-087000 in cluster newest-cni-087000
	I0823 12:04:00.044040    5773 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 12:04:00.044062    5773 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 12:04:00.044075    5773 cache.go:57] Caching tarball of preloaded images
	I0823 12:04:00.044133    5773 preload.go:174] Found /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0823 12:04:00.044138    5773 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0 on docker
	I0823 12:04:00.044209    5773 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/newest-cni-087000/config.json ...
	I0823 12:04:00.044558    5773 start.go:365] acquiring machines lock for newest-cni-087000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:04:00.044584    5773 start.go:369] acquired machines lock for "newest-cni-087000" in 19.584µs
	I0823 12:04:00.044593    5773 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:04:00.044596    5773 fix.go:54] fixHost starting: 
	I0823 12:04:00.044716    5773 fix.go:102] recreateIfNeeded on newest-cni-087000: state=Stopped err=<nil>
	W0823 12:04:00.044726    5773 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:04:00.049101    5773 out.go:177] * Restarting existing qemu2 VM for "newest-cni-087000" ...
	I0823 12:04:00.056919    5773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7e:c6:37:93:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:04:00.058686    5773 main.go:141] libmachine: STDOUT: 
	I0823 12:04:00.058702    5773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:04:00.058734    5773 fix.go:56] fixHost completed within 14.134209ms
	I0823 12:04:00.058739    5773 start.go:83] releasing machines lock for "newest-cni-087000", held for 14.151ms
	W0823 12:04:00.058745    5773 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:04:00.058778    5773 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:04:00.058782    5773 start.go:687] Will try again in 5 seconds ...
	I0823 12:04:05.061108    5773 start.go:365] acquiring machines lock for newest-cni-087000: {Name:mk16df683adb2d5b3ba189b58314d26afcd5a22f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0823 12:04:05.061462    5773 start.go:369] acquired machines lock for "newest-cni-087000" in 258.625µs
	I0823 12:04:05.061601    5773 start.go:96] Skipping create...Using existing machine configuration
	I0823 12:04:05.061624    5773 fix.go:54] fixHost starting: 
	I0823 12:04:05.062372    5773 fix.go:102] recreateIfNeeded on newest-cni-087000: state=Stopped err=<nil>
	W0823 12:04:05.062398    5773 fix.go:128] unexpected machine state, will restart: <nil>
	I0823 12:04:05.066297    5773 out.go:177] * Restarting existing qemu2 VM for "newest-cni-087000" ...
	I0823 12:04:05.073450    5773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7e:c6:37:93:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17086-1133/.minikube/machines/newest-cni-087000/disk.qcow2
	I0823 12:04:05.082712    5773 main.go:141] libmachine: STDOUT: 
	I0823 12:04:05.082776    5773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0823 12:04:05.082878    5773 fix.go:56] fixHost completed within 21.253833ms
	I0823 12:04:05.082904    5773 start.go:83] releasing machines lock for "newest-cni-087000", held for 21.419875ms
	W0823 12:04:05.083149    5773 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-087000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-087000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0823 12:04:05.090253    5773 out.go:177] 
	W0823 12:04:05.094388    5773 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0823 12:04:05.094412    5773 out.go:239] * 
	* 
	W0823 12:04:05.096842    5773 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 12:04:05.104209    5773 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-087000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000: exit status 7 (69.314459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
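Both restart attempts above fail at the same step: socket_vmnet_client dials the unix socket at /var/run/socket_vmnet before handing QEMU its network file descriptor, and the dial is refused, so the qemu2 VM never starts. A minimal, hypothetical Go probe (not part of the test suite) showing the connectivity check that is effectively failing here:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket the qemu2 driver passes to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With the daemon down this prints "connection refused",
		// matching the STDERR captured in the log above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe is refused, the socket_vmnet daemon is simply not running on the agent, which would explain why every qemu2-driver start in this run fails the same way.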

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-087000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-087000 "sudo crictl images -o json": exit status 89 (45.114917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-087000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-087000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-087000"
start_stop_delete_test.go:304: v1.28.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0",
- 	"registry.k8s.io/kube-proxy:v1.28.0",
- 	"registry.k8s.io/kube-scheduler:v1.28.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000: exit status 7 (30.28775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
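The "failed to decode images json" error above is a direct consequence of the stopped host: "minikube ssh" prints the control-plane banner instead of crictl's JSON output, so the first byte the decoder sees is '*'. A short sketch of that decode failure; the struct shape is an assumption based on crictl's documented output format, not code from the test suite:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the expected shape of `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// What the test actually received: minikube's banner, not JSON.
	out := []byte(`* The control plane node must be running for this command`)
	var imgs imageList
	if err := json.Unmarshal(out, &imgs); err != nil {
		// Prints: invalid character '*' looking for beginning of value
		fmt.Println(err)
	}
}

So the "images missing" diff above is not a real image regression: every expected image is reported missing only because no JSON was ever parsed.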

TestStartStop/group/newest-cni/serial/Pause (0.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-087000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-087000 --alsologtostderr -v=1: exit status 89 (40.542583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-087000"

-- /stdout --
** stderr ** 
	I0823 12:04:05.292248    5788 out.go:296] Setting OutFile to fd 1 ...
	I0823 12:04:05.292402    5788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:04:05.292405    5788 out.go:309] Setting ErrFile to fd 2...
	I0823 12:04:05.292407    5788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 12:04:05.292511    5788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 12:04:05.292701    5788 out.go:303] Setting JSON to false
	I0823 12:04:05.292710    5788 mustload.go:65] Loading cluster: newest-cni-087000
	I0823 12:04:05.292898    5788 config.go:182] Loaded profile config "newest-cni-087000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 12:04:05.296875    5788 out.go:177] * The control plane node must be running for this command
	I0823 12:04:05.299887    5788 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-087000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-087000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000: exit status 7 (29.400417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-087000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000: exit status 7 (29.288667ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-087000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (141/255)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.0/json-events 13.37
11 TestDownloadOnly/v1.28.0/preload-exists 0
14 TestDownloadOnly/v1.28.0/kubectl 0
15 TestDownloadOnly/v1.28.0/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.26
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.37
22 TestAddons/Setup 401.91
26 TestAddons/parallel/InspektorGadget 10.21
31 TestAddons/parallel/Headlamp 11.34
35 TestAddons/serial/GCPAuth/Namespaces 0.07
36 TestAddons/StoppedEnableDisable 12.27
44 TestHyperKitDriverInstallOrUpdate 8.13
47 TestErrorSpam/setup 28.5
48 TestErrorSpam/start 0.31
49 TestErrorSpam/status 0.27
50 TestErrorSpam/pause 0.68
51 TestErrorSpam/unpause 0.66
52 TestErrorSpam/stop 12.24
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 45.92
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 34.4
59 TestFunctional/serial/KubeContext 0.03
60 TestFunctional/serial/KubectlGetPods 0.04
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.74
64 TestFunctional/serial/CacheCmd/cache/add_local 1.41
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.94
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
70 TestFunctional/serial/MinikubeKubectlCmd 0.4
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
72 TestFunctional/serial/ExtraConfig 35.68
73 TestFunctional/serial/ComponentHealth 0.04
74 TestFunctional/serial/LogsCmd 0.66
75 TestFunctional/serial/LogsFileCmd 0.64
76 TestFunctional/serial/InvalidService 4.43
78 TestFunctional/parallel/ConfigCmd 0.21
79 TestFunctional/parallel/DashboardCmd 12.88
80 TestFunctional/parallel/DryRun 0.22
81 TestFunctional/parallel/InternationalLanguage 0.11
82 TestFunctional/parallel/StatusCmd 0.26
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 24.15
90 TestFunctional/parallel/SSHCmd 0.14
91 TestFunctional/parallel/CpCmd 0.3
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.44
98 TestFunctional/parallel/NodeLabels 0.08
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
102 TestFunctional/parallel/License 0.62
103 TestFunctional/parallel/Version/short 0.04
104 TestFunctional/parallel/Version/components 0.24
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
109 TestFunctional/parallel/ImageCommands/ImageBuild 2.1
110 TestFunctional/parallel/ImageCommands/Setup 1.93
111 TestFunctional/parallel/DockerEnv/bash 0.44
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
115 TestFunctional/parallel/ServiceCmd/DeployApp 11.1
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.23
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.61
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.9
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.12
128 TestFunctional/parallel/ServiceCmd/List 0.1
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.1
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
131 TestFunctional/parallel/ServiceCmd/Format 0.11
132 TestFunctional/parallel/ServiceCmd/URL 0.11
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
140 TestFunctional/parallel/ProfileCmd/profile_list 0.15
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
142 TestFunctional/parallel/MountCmd/any-port 5.19
143 TestFunctional/parallel/MountCmd/specific-port 0.86
145 TestFunctional/delete_addon-resizer_images 0.12
146 TestFunctional/delete_my-image_image 0.04
147 TestFunctional/delete_minikube_cached_images 0.04
151 TestImageBuild/serial/Setup 28.59
152 TestImageBuild/serial/NormalBuild 1.72
154 TestImageBuild/serial/BuildWithDockerIgnore 0.11
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
158 TestIngressAddonLegacy/StartLegacyK8sCluster 78.19
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.34
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.23
165 TestJSONOutput/start/Command 46.89
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.28
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.24
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 9.08
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.34
193 TestMainNoArgs 0.03
194 TestMinikubeProfile 63.94
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
255 TestNoKubernetes/serial/ProfileList 0.14
256 TestNoKubernetes/serial/Stop 0.06
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
276 TestStartStop/group/old-k8s-version/serial/Stop 0.06
277 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
281 TestStartStop/group/no-preload/serial/Stop 0.06
282 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
298 TestStartStop/group/embed-certs/serial/Stop 0.06
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
318 TestStartStop/group/newest-cni/serial/Stop 0.07
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-861000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-861000: exit status 85 (90.815667ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |          |
	|         | -p download-only-861000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:11:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:11:21.954622    1556 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:11:21.954763    1556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:11:21.954767    1556 out.go:309] Setting ErrFile to fd 2...
	I0823 11:11:21.954769    1556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:11:21.954879    1556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	W0823 11:11:21.954939    1556 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17086-1133/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17086-1133/.minikube/config/config.json: no such file or directory
	I0823 11:11:21.956120    1556 out.go:303] Setting JSON to true
	I0823 11:11:21.972641    1556 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":654,"bootTime":1692813627,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:11:21.972697    1556 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:11:21.978077    1556 out.go:97] [download-only-861000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:11:21.981087    1556 out.go:169] MINIKUBE_LOCATION=17086
	I0823 11:11:21.978236    1556 notify.go:220] Checking for updates...
	W0823 11:11:21.978268    1556 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball: no such file or directory
	I0823 11:11:21.988000    1556 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:11:21.991097    1556 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:11:21.994075    1556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:11:21.997043    1556 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	W0823 11:11:22.003026    1556 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0823 11:11:22.003219    1556 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:11:22.008076    1556 out.go:97] Using the qemu2 driver based on user configuration
	I0823 11:11:22.008097    1556 start.go:298] selected driver: qemu2
	I0823 11:11:22.008101    1556 start.go:902] validating driver "qemu2" against <nil>
	I0823 11:11:22.008160    1556 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0823 11:11:22.011037    1556 out.go:169] Automatically selected the socket_vmnet network
	I0823 11:11:22.014809    1556 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0823 11:11:22.014880    1556 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0823 11:11:22.014936    1556 cni.go:84] Creating CNI manager for ""
	I0823 11:11:22.014953    1556 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0823 11:11:22.014958    1556 start_flags.go:319] config:
	{Name:download-only-861000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-861000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:11:22.020621    1556 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:11:22.024167    1556 out.go:97] Downloading VM boot image ...
	I0823 11:11:22.024198    1556 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	E0823 11:11:22.440748    1556 iso.go:90] Unable to download https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256 Dst:/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso.download Pwd: Mode:2 Umask:---------- Detectors:[0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8] Decompressors:map[bz2:0x1400053dd40 gz:0x1400053dd48 tar:0x1400053dcf0 tar.bz2:0x1400053dd00 tar.gz:0x1400053dd10 tar.xz:0x1400053dd20 tar.zst:0x1400053dd30 tbz2:0x1400053dd00 tgz:0x1400053dd10 txz:0x1400053dd20 tzst:0x1400053dd30 xz:0x1400053dd50 zip:0x1400053dd60 zst:0x1400053dd58] Getters:map[file:0x14000fe1480 http:0x14000da9b80 https:0x14000da9bd0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	I0823 11:11:22.440917    1556 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:11:22.446466    1556 out.go:97] Downloading VM boot image ...
	I0823 11:11:22.446715    1556 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0823 11:11:30.978336    1556 out.go:97] Starting control plane node download-only-861000 in cluster download-only-861000
	I0823 11:11:30.978364    1556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 11:11:31.073961    1556 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0823 11:11:31.074044    1556 cache.go:57] Caching tarball of preloaded images
	I0823 11:11:31.074248    1556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 11:11:31.079365    1556 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0823 11:11:31.079374    1556 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:11:31.294710    1556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0823 11:11:45.710738    1556 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:11:45.710887    1556 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:11:46.353227    1556 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0823 11:11:46.353431    1556 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/download-only-861000/config.json ...
	I0823 11:11:46.353453    1556 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/download-only-861000/config.json: {Name:mka0c26ae17d68b5ee8df37dd23c2ce5da5a439c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0823 11:11:46.353679    1556 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0823 11:11:46.353906    1556 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0823 11:11:46.715567    1556 out.go:169] 
	W0823 11:11:46.719511    1556 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17086-1133/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8 0x106a6c5b8] Decompressors:map[bz2:0x1400053dd40 gz:0x1400053dd48 tar:0x1400053dcf0 tar.bz2:0x1400053dd00 tar.gz:0x1400053dd10 tar.xz:0x1400053dd20 tar.zst:0x1400053dd30 tbz2:0x1400053dd00 tgz:0x1400053dd10 txz:0x1400053dd20 tzst:0x1400053dd30 xz:0x1400053dd50 zip:0x1400053dd60 zst:0x1400053dd58] Getters:map[file:0x140010466a0 http:0x14001084140 https:0x14001084190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0823 11:11:46.719546    1556 out_reason.go:110] 
	W0823 11:11:46.726507    1556 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0823 11:11:46.731514    1556 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-861000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
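The Last Start log above shows the ISO download failing checksum verification and then succeeding from a fallback URL. The &{Ctx:...} dump is a hashicorp/go-getter client: the "?checksum=file:<url>.sha256" suffix asks go-getter to fetch a detached checksum file, and when that URL 404s the whole download is rejected ("invalid checksum: Error downloading checksum file"), after which minikube retries the GitHub release URL. A hedged reconstruction of that pattern, with URLs copied from the log; the retry wrapper is illustrative, not minikube's actual download.go:

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	dst := "/tmp/minikube-v1.31.0-arm64.iso"
	primary := "https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso" +
		"?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso.sha256"
	fallback := "https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso" +
		"?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso.sha256"

	// go-getter downloads the checksum file named in the query string and
	// verifies the payload against it; a 404 on that file fails the download.
	if err := getter.GetFile(dst, primary); err != nil {
		log.Printf("primary failed (%v), retrying release mirror", err)
		if err := getter.GetFile(dst, fallback); err != nil {
			log.Fatal(err)
		}
	}
}

The v1.16.0 kubectl download at the end of the same log fails the same way (404 on its .sha1 checksum file) but has no fallback mirror, which is what produces the GitHub-issue box above.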

TestDownloadOnly/v1.28.0/json-events (13.37s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-861000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=qemu2 : (13.371217875s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.37s)

TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-861000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-861000: exit status 85 (93.52875ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |          |
	|         | -p download-only-861000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-861000 | jenkins | v1.31.2 | 23 Aug 23 11:11 PDT |          |
	|         | -p download-only-861000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/23 11:11:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0823 11:11:46.912837    1572 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:11:46.912977    1572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:11:46.912980    1572 out.go:309] Setting ErrFile to fd 2...
	I0823 11:11:46.912982    1572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:11:46.913096    1572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	W0823 11:11:46.913157    1572 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17086-1133/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17086-1133/.minikube/config/config.json: no such file or directory
	I0823 11:11:46.914033    1572 out.go:303] Setting JSON to true
	I0823 11:11:46.929058    1572 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":679,"bootTime":1692813627,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:11:46.929163    1572 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:11:46.933680    1572 out.go:97] [download-only-861000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:11:46.936602    1572 out.go:169] MINIKUBE_LOCATION=17086
	I0823 11:11:46.933797    1572 notify.go:220] Checking for updates...
	I0823 11:11:46.943626    1572 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:11:46.947547    1572 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:11:46.950591    1572 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:11:46.953596    1572 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	W0823 11:11:46.959571    1572 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0823 11:11:46.959889    1572 config.go:182] Loaded profile config "download-only-861000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0823 11:11:46.959926    1572 start.go:810] api.Load failed for download-only-861000: filestore "download-only-861000": Docker machine "download-only-861000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0823 11:11:46.959975    1572 driver.go:373] Setting default libvirt URI to qemu:///system
	W0823 11:11:46.959991    1572 start.go:810] api.Load failed for download-only-861000: filestore "download-only-861000": Docker machine "download-only-861000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0823 11:11:46.963595    1572 out.go:97] Using the qemu2 driver based on existing profile
	I0823 11:11:46.963605    1572 start.go:298] selected driver: qemu2
	I0823 11:11:46.963607    1572 start.go:902] validating driver "qemu2" against &{Name:download-only-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-861000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:11:46.965598    1572 cni.go:84] Creating CNI manager for ""
	I0823 11:11:46.965612    1572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0823 11:11:46.965620    1572 start_flags.go:319] config:
	{Name:download-only-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-861000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:11:46.969598    1572 iso.go:125] acquiring lock: {Name:mka776653e1638e5290a780703ca95dff1deb1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0823 11:11:46.972592    1572 out.go:97] Starting control plane node download-only-861000 in cluster download-only-861000
	I0823 11:11:46.972600    1572 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:11:47.201635    1572 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0823 11:11:47.201730    1572 cache.go:57] Caching tarball of preloaded images
	I0823 11:11:47.202485    1572 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0823 11:11:47.206664    1572 out.go:97] Downloading Kubernetes v1.28.0 preload ...
	I0823 11:11:47.206687    1572 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	I0823 11:11:47.421175    1572 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /Users/jenkins/minikube-integration/17086-1133/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-861000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.26s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-861000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.37s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-536000 --alsologtostderr --binary-mirror http://127.0.0.1:49317 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-536000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-536000
--- PASS: TestBinaryMirror (0.37s)

TestAddons/Setup (401.91s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-603000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-603000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m41.910336417s)
--- PASS: TestAddons/Setup (401.91s)

TestAddons/parallel/InspektorGadget (10.21s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ndv9t" [d059753c-0bc7-41e8-8616-dcfd9769bf59] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007260833s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-603000
addons_test.go:817: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-603000: (5.202962292s)
--- PASS: TestAddons/parallel/InspektorGadget (10.21s)
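The helpers above poll for pods matching a label selector until one is healthy ("k8s-app=gadget healthy within 5.007260833s"). A hedged client-go sketch of such a wait loop; the namespace, selector, and 8m0s budget are taken from the log, while the loop itself is an assumed shape rather than the helpers_test.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, up to the 8m0s timeout the test declares.
	err = wait.PollImmediate(2*time.Second, 8*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("gadget").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=gadget"})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	fmt.Println("k8s-app=gadget healthy:", err == nil)
}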

TestAddons/parallel/Headlamp (11.34s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-603000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-fhmj2" [c34b494a-df64-4fd0-9ff9-0b52399c8158] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-fhmj2" [c34b494a-df64-4fd0-9ff9-0b52399c8158] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009604917s
--- PASS: TestAddons/parallel/Headlamp (11.34s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-603000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-603000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.27s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-603000
addons_test.go:148: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-603000: (12.080447125s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-603000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-603000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-603000
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

TestHyperKitDriverInstallOrUpdate (8.13s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.13s)

TestErrorSpam/setup (28.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-410000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-410000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 --driver=qemu2 : (28.498083542s)
--- PASS: TestErrorSpam/setup (28.50s)

TestErrorSpam/start (0.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

TestErrorSpam/status (0.27s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 status
--- PASS: TestErrorSpam/status (0.27s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 unpause
--- PASS: TestErrorSpam/unpause (0.66s)

TestErrorSpam/stop (12.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 stop: (12.07900975s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-410000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-410000 stop
--- PASS: TestErrorSpam/stop (12.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17086-1133/.minikube/files/etc/test/nested/copy/1554/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-840000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-840000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.917936083s)
--- PASS: TestFunctional/serial/StartWithProxy (45.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.4s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-840000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-840000 --alsologtostderr -v=8: (34.399175416s)
functional_test.go:659: soft start took 34.399710833s for "functional-840000" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.40s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-840000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 cache add registry.k8s.io/pause:3.1: (1.390214083s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 cache add registry.k8s.io/pause:3.3: (1.26021775s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 cache add registry.k8s.io/pause:latest: (1.093865208s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.74s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3335139553/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cache add minikube-local-cache-test:functional-840000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 cache add minikube-local-cache-test:functional-840000: (1.039142916s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cache delete minikube-local-cache-test:functional-840000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-840000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (75.617833ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.94s)
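
Note: the cache_reload flow above (remove the image inside the node, confirm crictl no longer finds it, run `cache reload`, confirm it is back) can be replayed by hand. A minimal Go sketch, assuming a minikube binary on PATH (the suite invokes out/minikube-darwin-arm64) and a running functional-840000 profile; the mk helper is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// mk runs one minikube command against the functional-840000 profile and
// echoes its combined output, mirroring the (dbg) Run lines above.
func mk(args ...string) error {
	full := append([]string{"-p", "functional-840000"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", full, out)
	return err
}

func main() {
	mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// Expected to fail here: the image was just removed from the node.
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("unexpected: image still present before reload")
	}
	mk("cache", "reload") // pushes the host-side cache back into the node
	// Expected to succeed again after the reload.
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}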

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.4s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 kubectl -- --context functional-840000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.40s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-840000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

TestFunctional/serial/ExtraConfig (35.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-840000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-840000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.682000125s)
functional_test.go:757: restart took 35.68210675s for "functional-840000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.68s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-840000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1311624107/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (4.43s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-840000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-840000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-840000: exit status 115 (115.506208ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32038 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-840000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-840000 delete -f testdata/invalidsvc.yaml: (1.162978875s)
--- PASS: TestFunctional/serial/InvalidService (4.43s)
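
Note: the failure mode exercised above is a Service whose selector matches no running pod; `minikube service` then exits with status 115 (SVC_UNREACHABLE) as shown. A minimal Go sketch of the same sequence, assuming minikube and kubectl on PATH and the suite's testdata/invalidsvc.yaml in the working directory.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-840000"
	// Apply a Service that selects no running pods (file from the suite's testdata).
	exec.Command("kubectl", "--context", profile, "apply", "-f", "testdata/invalidsvc.yaml").Run()
	// Asking minikube for the service URL should fail fast.
	err := exec.Command("minikube", "service", "invalid-svc", "-p", profile).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 115 == SVC_UNREACHABLE in this run
	}
	// Clean up, as the test does.
	exec.Command("kubectl", "--context", profile, "delete", "-f", "testdata/invalidsvc.yaml").Run()
}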

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 config get cpus: exit status 14 (28.885542ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 config get cpus: exit status 14 (28.798042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
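
Note: per the log, `config get` on an unset key exits with status 14, while set/get/unset round-trip cleanly. A minimal Go sketch of that contract, assuming a minikube binary on PATH; the getCpus helper is illustrative.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// getCpus is an illustrative helper: it runs `minikube config get cpus` for
// the given profile and returns the output plus the process exit code.
func getCpus(profile string) (string, int) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", "cpus").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode() // 14 when the key is unset, per the log
	}
	return string(out), 0
}

func main() {
	p := "functional-840000"
	exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run()
	_, code := getCpus(p)
	fmt.Println("after unset, exit code:", code) // expect 14
	exec.Command("minikube", "-p", p, "config", "set", "cpus", "2").Run()
	val, code := getCpus(p)
	fmt.Println("after set:", val, "exit code:", code) // expect "2" and 0
}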

TestFunctional/parallel/DashboardCmd (12.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-840000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-840000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3065: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.88s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-840000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-840000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.538291ms)

-- stdout --
	* [functional-840000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0823 11:48:34.660396    3044 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:48:34.660532    3044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:48:34.660535    3044 out.go:309] Setting ErrFile to fd 2...
	I0823 11:48:34.660538    3044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:48:34.660681    3044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:48:34.661704    3044 out.go:303] Setting JSON to false
	I0823 11:48:34.678520    3044 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2888,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:48:34.678582    3044 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:48:34.682634    3044 out.go:177] * [functional-840000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0823 11:48:34.690562    3044 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:48:34.694466    3044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:48:34.690574    3044 notify.go:220] Checking for updates...
	I0823 11:48:34.700526    3044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:48:34.703404    3044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:48:34.706470    3044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:48:34.709532    3044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:48:34.711257    3044 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:48:34.711477    3044 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:48:34.715595    3044 out.go:177] * Using the qemu2 driver based on existing profile
	I0823 11:48:34.722323    3044 start.go:298] selected driver: qemu2
	I0823 11:48:34.722327    3044 start.go:902] validating driver "qemu2" against &{Name:functional-840000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:functional-840000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:48:34.722373    3044 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:48:34.728542    3044 out.go:177] 
	W0823 11:48:34.732559    3044 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0823 11:48:34.736472    3044 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-840000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
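
Note: `--dry-run` exercises flag validation against the existing profile without booting a VM; the 250MB request is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) as captured above. A minimal Go sketch of that check, assuming a minikube binary on PATH.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// 250MB is below the usable minimum reported in the stderr capture above,
	// so validation fails before anything is started.
	cmd := exec.Command("minikube", "start", "-p", "functional-840000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=qemu2")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("250MB rejected with RSRC_INSUFFICIENT_REQ_MEMORY, as in the log")
	}
}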

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-840000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-840000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.919875ms)

-- stdout --
	* [functional-840000] minikube v1.31.2 sur Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0823 11:48:34.872199    3055 out.go:296] Setting OutFile to fd 1 ...
	I0823 11:48:34.872306    3055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:48:34.872308    3055 out.go:309] Setting ErrFile to fd 2...
	I0823 11:48:34.872310    3055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0823 11:48:34.872434    3055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
	I0823 11:48:34.873772    3055 out.go:303] Setting JSON to false
	I0823 11:48:34.890488    3055 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2888,"bootTime":1692813626,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0823 11:48:34.890564    3055 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0823 11:48:34.895438    3055 out.go:177] * [functional-840000] minikube v1.31.2 sur Darwin 13.5.1 (arm64)
	I0823 11:48:34.902520    3055 out.go:177]   - MINIKUBE_LOCATION=17086
	I0823 11:48:34.902555    3055 notify.go:220] Checking for updates...
	I0823 11:48:34.909495    3055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	I0823 11:48:34.912510    3055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0823 11:48:34.915559    3055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0823 11:48:34.918485    3055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	I0823 11:48:34.921511    3055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0823 11:48:34.924753    3055 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0823 11:48:34.924977    3055 driver.go:373] Setting default libvirt URI to qemu:///system
	I0823 11:48:34.929477    3055 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0823 11:48:34.936490    3055 start.go:298] selected driver: qemu2
	I0823 11:48:34.936494    3055 start.go:902] validating driver "qemu2" against &{Name:functional-840000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:functional-840000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0823 11:48:34.936539    3055 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0823 11:48:34.942524    3055 out.go:177] 
	W0823 11:48:34.946520    3055 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0823 11:48:34.950417    3055 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2195357e-5966-4b7a-bf10-519f98066ad5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006623709s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-840000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-840000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-840000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-840000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [47832e82-c0be-40e6-acc0-fe611924a4c0] Pending
helpers_test.go:344: "sp-pod" [47832e82-c0be-40e6-acc0-fe611924a4c0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [47832e82-c0be-40e6-acc0-fe611924a4c0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008885917s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-840000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-840000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-840000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [43323487-aecf-4506-a291-914cc6114d81] Pending
helpers_test.go:344: "sp-pod" [43323487-aecf-4506-a291-914cc6114d81] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [43323487-aecf-4506-a291-914cc6114d81] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008666959s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-840000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.15s)
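
Note: the heart of this test is a persistence check: a file written through the claim must survive deletion and re-creation of the pod. A minimal Go sketch of that sequence, assuming kubectl on PATH, the functional-840000 context, and the suite's testdata files; the kc helper is illustrative, and the real test also waits for sp-pod to be Running before each exec.

package main

import (
	"fmt"
	"os/exec"
)

// kc is an illustrative helper that runs kubectl against the
// functional-840000 context, like the (dbg) Run lines above.
func kc(args ...string) (string, error) {
	full := append([]string{"--context", "functional-840000"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml") // fresh pod, same claim
	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println(out, err) // expect "foo" to have survived the pod restart
}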

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh -n functional-840000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 cp functional-840000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1241252548/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh -n functional-840000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.30s)

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1554/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo cat /etc/test/nested/copy/1554/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1554.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo cat /etc/ssl/certs/1554.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1554.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo cat /usr/share/ca-certificates/1554.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo cat /etc/ssl/certs/15542.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo cat /usr/share/ca-certificates/15542.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.44s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-840000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "sudo systemctl is-active crio": exit status 1 (67.792833ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
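
Note: the exit codes above are layered: `systemctl is-active` exits 3 for an inactive unit, the in-VM ssh session reports that status, and the minikube wrapper surfaces a non-zero exit (1 here) with "inactive" on stdout, which is what the test accepts. A minimal Go sketch, assuming a minikube binary on PATH.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// With the docker runtime active, crio must not be running in the node.
	out, err := exec.Command("minikube", "-p", "functional-840000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("output=%q exit=%d\n", out, ee.ExitCode()) // expect "inactive", non-zero
	}
}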

TestFunctional/parallel/License (0.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.62s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.24s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.24s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-840000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.0
registry.k8s.io/kube-proxy:v1.28.0
registry.k8s.io/kube-controller-manager:v1.28.0
registry.k8s.io/kube-apiserver:v1.28.0
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-840000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-840000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-840000 image ls --format short --alsologtostderr:
I0823 11:48:41.552161    3083 out.go:296] Setting OutFile to fd 1 ...
I0823 11:48:41.553111    3083 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:41.553115    3083 out.go:309] Setting ErrFile to fd 2...
I0823 11:48:41.553117    3083 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:41.553239    3083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
I0823 11:48:41.553639    3083 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:41.553735    3083 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:41.554526    3083 ssh_runner.go:195] Run: systemctl --version
I0823 11:48:41.554536    3083 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
I0823 11:48:41.589736    3083 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls --format table --alsologtostderr
E0823 11:48:43.901369    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-840000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-840000 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-840000 | 7109e93aef1dd | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.28.0           | 00543d2fe5d71 | 119MB  |
| registry.k8s.io/kube-scheduler              | v1.28.0           | 762dce4090c5f | 57.8MB |
| registry.k8s.io/kube-controller-manager     | v1.28.0           | 46cc66ccc7c19 | 116MB  |
| registry.k8s.io/kube-proxy                  | v1.28.0           | 940f54a5bcae9 | 68.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-840000 | afb881c2ee7e7 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | 397432849901d | 43.4MB |
| docker.io/library/nginx                     | latest            | ab73c7fd67234 | 192MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-840000 image ls --format table --alsologtostderr:
I0823 11:48:43.902941    3097 out.go:296] Setting OutFile to fd 1 ...
I0823 11:48:43.903077    3097 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:43.903080    3097 out.go:309] Setting ErrFile to fd 2...
I0823 11:48:43.903082    3097 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:43.903202    3097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
I0823 11:48:43.903574    3097 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:43.903631    3097 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:43.904511    3097 ssh_runner.go:195] Run: systemctl --version
I0823 11:48:43.904528    3097 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
I0823 11:48:43.940702    3097 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0823 11:48:44.543541    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:45.825692    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
2023/08/23 11:48:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-840000 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},
{"id":"7109e93aef1ddf904ef4454e36430f58806276141e277f3d74acd3cc2a132fb7","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-840000"],"size":"1410000"},
{"id":"940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.0"],"size":"68300000"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},
{"id":"397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},
{"id":"46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.0"],"size":"116000000"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-840000"],"size":"32900000"},
{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},
{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},
{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"afb881c2ee7e7b0365a22cd32f1ed4f3addb766f5a1f17f463e2bc014b2ab52f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-840000"],"size":"30"},
{"id":"ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},
{"id":"00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.0"],"size":"119000000"},
{"id":"762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.0"],"size":"57800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-840000 image ls --format json --alsologtostderr:
I0823 11:48:43.822308    3095 out.go:296] Setting OutFile to fd 1 ...
I0823 11:48:43.822429    3095 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:43.822432    3095 out.go:309] Setting ErrFile to fd 2...
I0823 11:48:43.822435    3095 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:43.822558    3095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
I0823 11:48:43.822957    3095 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:43.823018    3095 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:43.823916    3095 ssh_runner.go:195] Run: systemctl --version
I0823 11:48:43.823927    3095 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
I0823 11:48:43.859883    3095 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
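Editor's note: of the short/table/json/yaml listings exercised in these subtests, the JSON form is the one meant for scripting. A minimal post-processing sketch for the output above (jq is an assumption about the host; everything else is from this run):

# Print each image as "tag<TAB>size", sorted by tag (assumes jq is installed).
out/minikube-darwin-arm64 -p functional-840000 image ls --format json \
  | jq -r '.[] | .repoTags[] as $t | "\($t)\t\(.size)"' \
  | sort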

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-840000 image ls --format yaml --alsologtostderr:
- id: 397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.0
size: "119000000"
- id: 46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.0
size: "116000000"
- id: 762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.0
size: "57800000"
- id: 940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.0
size: "68300000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-840000
size: "32900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: afb881c2ee7e7b0365a22cd32f1ed4f3addb766f5a1f17f463e2bc014b2ab52f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-840000
size: "30"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-840000 image ls --format yaml --alsologtostderr:
I0823 11:48:41.640954    3085 out.go:296] Setting OutFile to fd 1 ...
I0823 11:48:41.641105    3085 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:41.641109    3085 out.go:309] Setting ErrFile to fd 2...
I0823 11:48:41.641111    3085 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:41.641232    3085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
I0823 11:48:41.641662    3085 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:41.641726    3085 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:41.642552    3085 ssh_runner.go:195] Run: systemctl --version
I0823 11:48:41.642564    3085 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
I0823 11:48:41.677356    3085 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh pgrep buildkitd: exit status 1 (69.217959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image build -t localhost/my-image:functional-840000 testdata/build --alsologtostderr
E0823 11:48:43.253238    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:43.260001    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:43.272061    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:43.294088    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:43.336120    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:43.417850    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:43.579916    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 image build -t localhost/my-image:functional-840000 testdata/build --alsologtostderr: (1.950474375s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-840000 image build -t localhost/my-image:functional-840000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in d7718cc3cefc
Removing intermediate container d7718cc3cefc
---> c6d63a301320
Step 3/3 : ADD content.txt /
---> 7109e93aef1d
Successfully built 7109e93aef1d
Successfully tagged localhost/my-image:functional-840000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-840000 image build -t localhost/my-image:functional-840000 testdata/build --alsologtostderr:
I0823 11:48:41.792637    3089 out.go:296] Setting OutFile to fd 1 ...
I0823 11:48:41.792861    3089 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:41.792866    3089 out.go:309] Setting ErrFile to fd 2...
I0823 11:48:41.792868    3089 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 11:48:41.792992    3089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17086-1133/.minikube/bin
I0823 11:48:41.793404    3089 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:41.793794    3089 config.go:182] Loaded profile config "functional-840000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0
I0823 11:48:41.794655    3089 ssh_runner.go:195] Run: systemctl --version
I0823 11:48:41.794665    3089 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17086-1133/.minikube/machines/functional-840000/id_rsa Username:docker}
I0823 11:48:41.830805    3089 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.333387829.tar
I0823 11:48:41.830865    3089 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0823 11:48:41.834458    3089 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.333387829.tar
I0823 11:48:41.835972    3089 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.333387829.tar: stat -c "%s %y" /var/lib/minikube/build/build.333387829.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.333387829.tar': No such file or directory
I0823 11:48:41.835995    3089 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.333387829.tar --> /var/lib/minikube/build/build.333387829.tar (3072 bytes)
I0823 11:48:41.843846    3089 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.333387829
I0823 11:48:41.846870    3089 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.333387829 -xf /var/lib/minikube/build/build.333387829.tar
I0823 11:48:41.850551    3089 docker.go:339] Building image: /var/lib/minikube/build/build.333387829
I0823 11:48:41.850614    3089 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-840000 /var/lib/minikube/build/build.333387829
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0823 11:48:43.699048    3089 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-840000 /var/lib/minikube/build/build.333387829: (1.848445s)
I0823 11:48:43.699127    3089 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.333387829
I0823 11:48:43.702394    3089 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.333387829.tar
I0823 11:48:43.705153    3089 build_images.go:207] Built localhost/my-image:functional-840000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.333387829.tar
I0823 11:48:43.705170    3089 build_images.go:123] succeeded building to: functional-840000
I0823 11:48:43.705174    3089 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.10s)
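Editor's note: Steps 1/3 through 3/3 in the build output imply a three-instruction Dockerfile under testdata/build. A hedged recreation of that context (the real file contents are not shown in this report; content.txt is assumed to be any small file):

# Recreate the build context implied by the log above, then rerun the same build.
mkdir -p build && cd build
printf 'test\n' > content.txt        # assumed: any small file satisfies the ADD step
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-darwin-arm64 -p functional-840000 image build -t localhost/my-image:functional-840000 . --alsologtostderr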

TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.878960917s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-840000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

TestFunctional/parallel/DockerEnv/bash (0.44s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-840000 docker-env) && out/minikube-darwin-arm64 status -p functional-840000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-840000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.44s)
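Editor's note: docker-env prints shell exports (DOCKER_HOST plus TLS settings) that point the host's docker client at the daemon inside the VM, which is why both commands above are wrapped in eval. Minimal usage, taken from the test plus the --unset counterpart:

# Target the docker daemon inside functional-840000, verify, then undo.
eval "$(out/minikube-darwin-arm64 -p functional-840000 docker-env)"
docker images                        # now lists the cluster-side images
eval "$(out/minikube-darwin-arm64 -p functional-840000 docker-env --unset)"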

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-840000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-840000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-gxhsb" [8d60ff83-4af8-48f9-ad61-c3e7ae72b39a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-gxhsb" [8d60ff83-4af8-48f9-ad61-c3e7ae72b39a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.013121416s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.10s)
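Editor's note: DeployApp is plain kubectl against the functional-840000 context; the ServiceCmd subtests that follow resolve the resulting NodePort through minikube. The three commands, condensed from this test and the URL subtest below:

# Deploy, expose as NodePort, then ask minikube for the reachable URL.
kubectl --context functional-840000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-840000 expose deployment hello-node --type=NodePort --port=8080
out/minikube-darwin-arm64 -p functional-840000 service hello-node --url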

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image load --daemon gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 image load --daemon gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr: (2.14550425s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image load --daemon gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 image load --daemon gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr: (1.5337065s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.61s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.879723083s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-840000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image load --daemon gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-840000 image load --daemon gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr: (1.900907292s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.90s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image save gcr.io/google-containers/addon-resizer:functional-840000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image rm gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-840000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 image save --daemon gcr.io/google-containers/addon-resizer:functional-840000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-840000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
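Editor's note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together make a full tarball round trip. Chained into one sequence (every command is taken from the four subtests above):

# Save the image to a tarball, drop it from the cluster, load it back,
# then copy it into the host docker daemon and verify.
IMG=gcr.io/google-containers/addon-resizer:functional-840000
TAR=/Users/jenkins/workspace/addon-resizer-save.tar
out/minikube-darwin-arm64 -p functional-840000 image save "$IMG" "$TAR" --alsologtostderr
out/minikube-darwin-arm64 -p functional-840000 image rm "$IMG" --alsologtostderr
out/minikube-darwin-arm64 -p functional-840000 image load "$TAR" --alsologtostderr
out/minikube-darwin-arm64 -p functional-840000 image save --daemon "$IMG" --alsologtostderr
docker image inspect "$IMG"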

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-840000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1dad0178-ad8b-4c73-8f0e-734dba3bf9e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1dad0178-ad8b-4c73-8f0e-734dba3bf9e8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007766292s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.12s)

TestFunctional/parallel/ServiceCmd/List (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.10s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 service list -o json
functional_test.go:1493: Took "96.355875ms" to run "out/minikube-darwin-arm64 -p functional-840000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.10s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31906
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31906
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-840000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.251.186 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
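Editor's note: the tunnel serial chain condenses to: start the tunnel daemon, create a LoadBalancer service, read the ingress IP it is assigned, then resolve and reach it through cluster DNS. A sketch stitched together from the subtests above (testdata/testsvc.yaml and the resolver 10.96.0.10 are from the logs; the PID bookkeeping stands in for the harness's daemon handling):

# Condensed tunnel workflow from the TunnelCmd/serial subtests.
out/minikube-darwin-arm64 -p functional-840000 tunnel --alsologtostderr &
TUNNEL_PID=$!
kubectl --context functional-840000 apply -f testdata/testsvc.yaml
kubectl --context functional-840000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
kill "$TUNNEL_PID"                   # DeleteTunnel: stop the tunnel daemon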

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "119.281667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "33.801125ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "119.305375ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "33.593625ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (5.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2274531702/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692816502663966000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2274531702/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692816502663966000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2274531702/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692816502663966000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2274531702/001/test-1692816502663966000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.367334ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 23 18:48 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 23 18:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 23 18:48 test-1692816502663966000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh cat /mount-9p/test-1692816502663966000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-840000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0a9c04a9-81b0-43c2-9b8a-ade96c7da883] Pending
helpers_test.go:344: "busybox-mount" [0a9c04a9-81b0-43c2-9b8a-ade96c7da883] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0a9c04a9-81b0-43c2-9b8a-ade96c7da883] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0a9c04a9-81b0-43c2-9b8a-ade96c7da883] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006034458s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-840000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2274531702/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.19s)
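Editor's note: any-port drives a 9p mount end to end: a host directory is mounted at /mount-9p, verified with findmnt, exercised by a busybox pod, then force-unmounted. The essential commands, condensed from the log (HOSTDIR is a hypothetical stand-in for the TestFunctionalparallelMountCmdany-port temp dir):

# 9p mount round trip; HOSTDIR is hypothetical, the rest is from the log.
HOSTDIR=$(mktemp -d)
out/minikube-darwin-arm64 mount -p functional-840000 "$HOSTDIR":/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-arm64 -p functional-840000 ssh -- ls -la /mount-9p
out/minikube-darwin-arm64 -p functional-840000 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"                    # stop the mount daemon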

TestFunctional/parallel/MountCmd/specific-port (0.86s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2220720469/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (71.46075ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2220720469/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "sudo umount -f /mount-9p": exit status 1 (67.921375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-840000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2220720469/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.86s)

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-840000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-840000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-840000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (28.59s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-056000 --driver=qemu2 
E0823 11:48:48.387902    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:48:53.510000    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:49:03.752073    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-056000 --driver=qemu2 : (28.592685959s)
--- PASS: TestImageBuild/serial/Setup (28.59s)

TestImageBuild/serial/NormalBuild (1.72s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-056000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-056000: (1.719654625s)
--- PASS: TestImageBuild/serial/NormalBuild (1.72s)

TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-056000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-056000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)
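Editor's note: BuildWithSpecifiedDockerfile exercises the -f flag, which names a Dockerfile relative to the build context rather than at its root. A sketch of an equivalent context (directory names come from the command above; the Dockerfile contents are assumed, not taken from this report):

# Hypothetical recreation of the test-f context used by the -f test above.
mkdir -p test-f/inner
cat > test-f/inner/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
EOF
out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./test-f -p image-056000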

TestIngressAddonLegacy/StartLegacyK8sCluster (78.19s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-537000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
E0823 11:49:24.234046    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
E0823 11:50:05.195053    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-537000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m18.193649166s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (78.19s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.34s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 addons enable ingress --alsologtostderr -v=5: (17.336420292s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.34s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-537000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

TestJSONOutput/start/Command (46.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-144000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-144000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (46.889087167s)
--- PASS: TestJSONOutput/start/Command (46.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.28s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-144000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.24s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-144000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.24s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-144000 --output=json --user=testUser
E0823 11:52:46.201332    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:46.207720    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:46.219876    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:46.241994    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:46.284140    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:46.366245    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:46.528378    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:46.850568    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:47.492891    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:48.775220    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-144000 --output=json --user=testUser: (9.079038542s)
--- PASS: TestJSONOutput/stop/Command (9.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.34s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-109000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-109000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.725875ms)
-- stdout --
	{"specversion":"1.0","id":"85bf887c-d4ac-4efd-99b6-f242b4cbc0c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-109000] minikube v1.31.2 on Darwin 13.5.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ced5b8d-1233-4a17-92f7-f2f09b90bc44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17086"}}
	{"specversion":"1.0","id":"6e66dc2b-9b34-409c-b222-28ff1afc8a2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig"}}
	{"specversion":"1.0","id":"b1d34094-2bd2-4a08-9bab-be053312fdad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"48b1f0a4-a85a-4ded-9631-f7e28ab1e66a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8876eec3-d49a-495b-acef-2097637174d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube"}}
	{"specversion":"1.0","id":"cda8e07c-2bbf-48ff-a148-51df5c0e1d2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3ce4eabe-661d-433b-8376-197472730e59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-109000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-109000
--- PASS: TestErrorJSONOutput (0.34s)
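
Aside: for anyone replaying these TestJSONOutput checks by hand, each line minikube emits with --output=json is a CloudEvents envelope like those in the stdout above. The following is a minimal Go sketch (not part of the test suite; the struct and field names are inferred from the JSON shown) that decodes a stream and flags a non-increasing "currentstep", roughly what the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert:

// decode_events.go - pipe: minikube start --output=json | go run decode_events.go
// The event shape is inferred from the CloudEvents lines captured above;
// only the fields used here are declared.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		if e.Type != "io.k8s.sigs.minikube.step" {
			continue // only step events carry "currentstep"
		}
		step, err := strconv.Atoi(e.Data["currentstep"])
		if err != nil {
			continue
		}
		if step <= last {
			fmt.Fprintf(os.Stderr, "step regressed: %d after %d\n", step, last)
		}
		last = step
	}
}

Note that "currentstep" is a string in the JSON ("currentstep":"0" above), hence the strconv.Atoi.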

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (63.94s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-712000 --driver=qemu2 
E0823 11:52:51.337340    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:52:56.459299    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:53:06.700390    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-712000 --driver=qemu2 : (29.631972041s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-714000 --driver=qemu2 
E0823 11:53:27.181891    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/functional-840000/client.crt: no such file or directory
E0823 11:53:43.243279    1554 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17086-1133/.minikube/profiles/addons-603000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-714000 --driver=qemu2 : (33.487028s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-712000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-714000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-714000
helpers_test.go:175: Cleaning up "first-712000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-712000
--- PASS: TestMinikubeProfile (63.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-831000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.710917ms)
-- stdout --
	* [NoKubernetes-831000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17086
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17086-1133/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17086-1133/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-831000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-831000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.36775ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-831000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
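
Aside: the check above treats a non-zero exit from the ssh'd `systemctl is-active` as the expected "kubelet not running" signal. A standalone sketch of reading that exit status in Go with os/exec (the command strings are taken from the log; everything else is illustrative):

// exitcode.go - how a non-zero exit status is inspected in Go.
// Illustrative only; not the harness's actual helper code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-831000",
		"sudo systemctl is-active --quiet service kubelet")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 89 in the log above means the control plane is not
		// running, which is exactly what this subtest wants to observe.
		fmt.Printf("exit code %d, output: %s\n", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	fmt.Println("kubelet is active - unexpected for this subtest")
}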

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-831000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-831000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-831000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.2795ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-831000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-499000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-499000 -n old-k8s-version-499000: exit status 7 (27.8365ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-499000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
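
Aside: the status checks in these EnableAddonAfterStop subtests pass a Go template via --format={{.Host}}, and the "Stopped" in the captured stdout is that template rendered over the status struct. A self-contained sketch of the rendering mechanism with text/template (the Status struct here is a stand-in for illustration, not minikube's actual type):

// format.go - how a --format={{.Host}} style Go template renders.
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in struct; only Host is grounded in the log above.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Prints "Stopped", matching the stdout captured above.
	tmpl.Execute(os.Stdout, st)
	os.Stdout.WriteString("\n")
}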

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-513000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-513000 -n no-preload-513000: exit status 7 (28.771458ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-513000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-598000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-598000 -n embed-certs-598000: exit status 7 (28.416041ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-598000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-411000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-411000 -n default-k8s-diff-port-411000: exit status 7 (29.599ms)
-- stdout --
	Stopped

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-411000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-087000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-087000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-087000 -n newest-cni-087000: exit status 7 (30.078167ms)
-- stdout --
	Stopped

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-087000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/255)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1: exit status 1 (75.593667ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2: exit status 1 (65.490667ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2: exit status 1 (66.642125ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2: exit status 1 (67.418625ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2: exit status 1 (67.520625ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2: exit status 1 (77.753083ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-840000 ssh "findmnt -T" /mount2: exit status 1 (67.123417ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-840000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1323226976/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.37s)
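
Aside: the log above shows the harness repeatedly running `findmnt -T` until the mounts appear or a deadline passes, then skipping. A generic Go sketch of that poll-until-deadline pattern (the helper name and intervals are illustrative, not the test's actual code):

// poll.go - poll-until-deadline pattern visible in the findmnt retries above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollUntil retries check every interval until it succeeds or the deadline
// elapses, returning whether it ever succeeded.
func pollUntil(deadline, interval time.Duration, check func() bool) bool {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if check() {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	ok := pollUntil(10*time.Second, time.Second, func() bool {
		// Mirrors the repeated check in the log; exits non-zero until mounted.
		return exec.Command("findmnt", "-T", "/mount1").Run() == nil
	})
	if !ok {
		fmt.Println("mount did not appear; skipping, as the test above does")
	}
}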

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-953000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-953000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-953000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-953000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-953000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-953000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-953000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-953000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-953000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-953000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-953000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-953000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-953000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: kubelet daemon config:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> k8s: kubelet logs:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

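Note: the empty kubeconfig above (clusters, contexts, and users all null, current-context "") is consistent with every "context ... does not exist" error in the preceding sections — no context named "cilium-953000" was ever written, so kubectl has nothing to resolve. A minimal check one could run on the same host, assuming only that kubectl is on PATH:

    kubectl config get-contexts    # with an empty kubeconfig this prints just the column header, no rows
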
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-953000

>>> host: docker daemon status:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: docker daemon config:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: docker system info:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: cri-docker daemon status:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: cri-docker daemon config:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: cri-dockerd version:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: containerd daemon status:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: containerd daemon config:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: containerd config dump:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: crio daemon status:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: crio daemon config:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: /etc/crio:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

>>> host: crio config:
* Profile "cilium-953000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-953000"

----------------------- debugLogs end: cilium-953000 [took: 2.270116875s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-953000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-953000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)
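Note: every probe in the debugLogs dump above failed the same way because the "cilium-953000" profile was never created — the cilium variant of TestNetworkPlugins is skipped on this driver, so minikube had no profile and kubectl no matching context when the log collector ran. A sketch of how the same probes could be given a live cluster, assuming a local minikube build and a driver that can actually start one (the binary path, profile name, and --cni value are taken from the log above):

    out/minikube-darwin-arm64 start -p cilium-953000 --cni=cilium   # create the profile the probes expect
    kubectl --context cilium-953000 get pods -A                     # this context exists only after start succeeds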

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-174000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-174000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
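Note: this skip is by design rather than a failure — per start_stop_delete_test.go:103 the disable-driver-mounts group only runs on the virtualbox driver, where the hypervisor host-folder mounts that the test disables exist. A hedged sketch of the equivalent manual invocation on a virtualbox host (assuming minikube's --disable-driver-mounts start flag, which this group exercises; untested in this run):

    minikube start -p disable-driver-mounts-174000 --driver=virtualbox --disable-driver-mounts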
