Test Report: none_Linux 18943

Commit a95fbdf9550db8c431fa5a4c330192118acd2cbf (2024-08-31, build 36027)

Failed tests (1/168)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 71.96s   |
TestAddons/parallel/Registry (71.96s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.176008ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-6fb4cdfc84-kvbfn" [d885b228-b6fd-46cb-8255-e4f053cab565] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003709165s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-proxy-zvqvj" [60f8afd4-c385-4f32-966b-848604e750b8] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003908906s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.096619447s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/08/31 22:18:13 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:45795               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:06 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:08 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.33.1 | 31 Aug 24 22:08 UTC | 31 Aug 24 22:09 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:38.198038  125982 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:38.198183  125982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:38.198189  125982 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:38.198195  125982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:38.198640  125982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-115525/.minikube/bin
	I0831 22:06:38.199610  125982 out.go:352] Setting JSON to false
	I0831 22:06:38.200615  125982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6538,"bootTime":1725135460,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:06:38.200681  125982 start.go:139] virtualization: kvm guest
	I0831 22:06:38.203558  125982 out.go:177] * minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0831 22:06:38.205464  125982 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-115525/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:06:38.205544  125982 notify.go:220] Checking for updates...
	I0831 22:06:38.205591  125982 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:06:38.207373  125982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:38.208888  125982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	I0831 22:06:38.210442  125982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	I0831 22:06:38.211818  125982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:06:38.213243  125982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:06:38.214916  125982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:38.226086  125982 out.go:177] * Using the none driver based on user configuration
	I0831 22:06:38.227362  125982 start.go:297] selected driver: none
	I0831 22:06:38.227387  125982 start.go:901] validating driver "none" against <nil>
	I0831 22:06:38.227402  125982 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:06:38.227438  125982 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0831 22:06:38.227777  125982 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0831 22:06:38.228366  125982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:38.228611  125982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:06:38.228679  125982 cni.go:84] Creating CNI manager for ""
	I0831 22:06:38.228697  125982 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:38.228708  125982 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:38.228752  125982 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:38.230443  125982 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0831 22:06:38.231979  125982 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/config.json ...
	I0831 22:06:38.232028  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/config.json: {Name:mk38970e9643bec80982a3b131f2acc9873dfdc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:38.232189  125982 start.go:360] acquireMachinesLock for minikube: {Name:mkc056fd7d8ffacbbb2c538a5230980b861303cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0831 22:06:38.232218  125982 start.go:364] duration metric: took 15.919µs to acquireMachinesLock for "minikube"
	I0831 22:06:38.232232  125982 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 22:06:38.232302  125982 start.go:125] createHost starting for "" (driver="none")
	I0831 22:06:38.233821  125982 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0831 22:06:38.235034  125982 exec_runner.go:51] Run: systemctl --version
	I0831 22:06:38.237869  125982 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0831 22:06:38.237909  125982 client.go:168] LocalClient.Create starting
	I0831 22:06:38.237970  125982 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-115525/.minikube/certs/ca.pem
	I0831 22:06:38.237998  125982 main.go:141] libmachine: Decoding PEM data...
	I0831 22:06:38.238013  125982 main.go:141] libmachine: Parsing certificate...
	I0831 22:06:38.238076  125982 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-115525/.minikube/certs/cert.pem
	I0831 22:06:38.238102  125982 main.go:141] libmachine: Decoding PEM data...
	I0831 22:06:38.238113  125982 main.go:141] libmachine: Parsing certificate...
	I0831 22:06:38.238428  125982 client.go:171] duration metric: took 509.417µs to LocalClient.Create
	I0831 22:06:38.238452  125982 start.go:167] duration metric: took 587.104µs to libmachine.API.Create "minikube"
	I0831 22:06:38.238458  125982 start.go:293] postStartSetup for "minikube" (driver="none")
	I0831 22:06:38.238492  125982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:06:38.238532  125982 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:06:38.247650  125982 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 22:06:38.247675  125982 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 22:06:38.247684  125982 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 22:06:38.249747  125982 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0831 22:06:38.251128  125982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-115525/.minikube/addons for local assets ...
	I0831 22:06:38.251200  125982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-115525/.minikube/files for local assets ...
	I0831 22:06:38.251224  125982 start.go:296] duration metric: took 12.757896ms for postStartSetup
	I0831 22:06:38.251860  125982 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/config.json ...
	I0831 22:06:38.252000  125982 start.go:128] duration metric: took 19.687122ms to createHost
	I0831 22:06:38.252019  125982 start.go:83] releasing machines lock for "minikube", held for 19.792587ms
	I0831 22:06:38.252445  125982 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 22:06:38.252554  125982 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0831 22:06:38.254615  125982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0831 22:06:38.254689  125982 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:06:38.263213  125982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 22:06:38.263256  125982 start.go:495] detecting cgroup driver to use...
	I0831 22:06:38.263302  125982 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:06:38.263502  125982 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:06:38.285458  125982 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0831 22:06:38.295232  125982 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 22:06:38.305689  125982 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 22:06:38.305764  125982 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 22:06:38.316456  125982 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:06:38.326450  125982 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 22:06:38.337100  125982 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:06:38.346745  125982 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:06:38.356063  125982 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 22:06:38.366439  125982 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 22:06:38.376039  125982 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 22:06:38.386241  125982 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:06:38.394321  125982 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:06:38.402202  125982 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0831 22:06:38.660723  125982 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0831 22:06:38.781235  125982 start.go:495] detecting cgroup driver to use...
	I0831 22:06:38.781306  125982 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:06:38.781447  125982 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:06:38.804346  125982 exec_runner.go:51] Run: which cri-dockerd
	I0831 22:06:38.805512  125982 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 22:06:38.814521  125982 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0831 22:06:38.814554  125982 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0831 22:06:38.814592  125982 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0831 22:06:38.824255  125982 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0831 22:06:38.824502  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube901387065 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0831 22:06:38.834253  125982 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0831 22:06:39.065143  125982 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0831 22:06:39.298331  125982 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 22:06:39.298498  125982 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0831 22:06:39.298515  125982 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0831 22:06:39.298562  125982 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0831 22:06:39.309149  125982 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0831 22:06:39.309315  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1861325209 /etc/docker/daemon.json
	I0831 22:06:39.318244  125982 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0831 22:06:39.514989  125982 exec_runner.go:51] Run: sudo systemctl restart docker
	I0831 22:06:39.941952  125982 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 22:06:39.953628  125982 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0831 22:06:39.971302  125982 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 22:06:39.982683  125982 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0831 22:06:40.211504  125982 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0831 22:06:40.435736  125982 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0831 22:06:40.667940  125982 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0831 22:06:40.683981  125982 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 22:06:40.696327  125982 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0831 22:06:40.916689  125982 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0831 22:06:40.990539  125982 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 22:06:40.990634  125982 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0831 22:06:40.992165  125982 start.go:563] Will wait 60s for crictl version
	I0831 22:06:40.992235  125982 exec_runner.go:51] Run: which crictl
	I0831 22:06:40.993174  125982 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0831 22:06:41.022336  125982 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0831 22:06:41.022406  125982 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0831 22:06:41.043446  125982 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0831 22:06:41.069644  125982 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0831 22:06:41.069743  125982 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0831 22:06:41.072674  125982 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0831 22:06:41.073880  125982 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:06:41.074011  125982 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:41.074030  125982 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.0 docker true true} ...
	I0831 22:06:41.074131  125982 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0831 22:06:41.074180  125982 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0831 22:06:41.122238  125982 cni.go:84] Creating CNI manager for ""
	I0831 22:06:41.122265  125982 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:41.122282  125982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:06:41.122305  125982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:06:41.122447  125982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.154.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-9"
	  kubeletExtraArgs:
	    node-ip: 10.154.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:06:41.122511  125982 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:06:41.132161  125982 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0831 22:06:41.132223  125982 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0831 22:06:41.141837  125982 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0831 22:06:41.141862  125982 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0831 22:06:41.141913  125982 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:06:41.141923  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0831 22:06:41.141965  125982 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0831 22:06:41.142019  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0831 22:06:41.153546  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0831 22:06:41.196289  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube259063626 /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0831 22:06:41.210973  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube439818643 /var/lib/minikube/binaries/v1.31.0/kubectl
	I0831 22:06:41.240106  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2372636827 /var/lib/minikube/binaries/v1.31.0/kubelet
	I0831 22:06:41.306010  125982 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:06:41.315357  125982 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0831 22:06:41.315385  125982 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0831 22:06:41.315424  125982 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0831 22:06:41.324815  125982 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0831 22:06:41.324977  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube10907753 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0831 22:06:41.333555  125982 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0831 22:06:41.333579  125982 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0831 22:06:41.333618  125982 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0831 22:06:41.341375  125982 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:06:41.341530  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4021921035 /lib/systemd/system/kubelet.service
	I0831 22:06:41.350082  125982 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0831 22:06:41.350237  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4144947013 /var/tmp/minikube/kubeadm.yaml.new
	I0831 22:06:41.359203  125982 exec_runner.go:51] Run: grep 10.154.0.4	control-plane.minikube.internal$ /etc/hosts
	I0831 22:06:41.360532  125982 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0831 22:06:41.599028  125982 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0831 22:06:41.614180  125982 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube for IP: 10.154.0.4
	I0831 22:06:41.614205  125982 certs.go:194] generating shared ca certs ...
	I0831 22:06:41.614225  125982 certs.go:226] acquiring lock for ca certs: {Name:mk42f4283ef9dc63e84715505c7d94c673417f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:41.614377  125982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-115525/.minikube/ca.key
	I0831 22:06:41.614417  125982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-115525/.minikube/proxy-client-ca.key
	I0831 22:06:41.614426  125982 certs.go:256] generating profile certs ...
	I0831 22:06:41.614484  125982 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/client.key
	I0831 22:06:41.614498  125982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/client.crt with IP's: []
	I0831 22:06:41.799662  125982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/client.crt ...
	I0831 22:06:41.799697  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/client.crt: {Name:mk895ad4369377aba46aa8638f7a7204e9f23489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:41.799843  125982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/client.key ...
	I0831 22:06:41.799854  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/client.key: {Name:mk366a1a4c24c5ece720558d564010a3cfa4588c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:41.799919  125982 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.key.1b9420d6
	I0831 22:06:41.799935  125982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
	I0831 22:06:41.984534  125982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
	I0831 22:06:41.984569  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk61ddefc2defeda2698d8f3e68817b3ba6d07c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:41.984724  125982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
	I0831 22:06:41.984736  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mk3317122c350fca361294ac539645f43c7ebe27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:41.984790  125982 certs.go:381] copying /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.crt
	I0831 22:06:41.984889  125982 certs.go:385] copying /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.key
	I0831 22:06:41.984947  125982 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.key
	I0831 22:06:41.984966  125982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0831 22:06:42.209268  125982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.crt ...
	I0831 22:06:42.209305  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.crt: {Name:mk7f9f735ca668d8b830c4d3c2f6403af6e1cbaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:42.209448  125982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.key ...
	I0831 22:06:42.209461  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.key: {Name:mk9cb2b9779624186373c6f31c9aacd5ef6fa889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:42.209616  125982 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-115525/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 22:06:42.209650  125982 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-115525/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:06:42.209717  125982 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-115525/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:06:42.209754  125982 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-115525/.minikube/certs/key.pem (1675 bytes)
	I0831 22:06:42.210403  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:06:42.210544  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1971397587 /var/lib/minikube/certs/ca.crt
	I0831 22:06:42.219444  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0831 22:06:42.219597  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1781720650 /var/lib/minikube/certs/ca.key
	I0831 22:06:42.228570  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:06:42.228753  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3647988105 /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 22:06:42.237684  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:06:42.237855  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1680987412 /var/lib/minikube/certs/proxy-client-ca.key
	I0831 22:06:42.246378  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0831 22:06:42.246569  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1619829464 /var/lib/minikube/certs/apiserver.crt
	I0831 22:06:42.254698  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:06:42.254860  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1095410257 /var/lib/minikube/certs/apiserver.key
	I0831 22:06:42.262828  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:06:42.263023  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3055603790 /var/lib/minikube/certs/proxy-client.crt
	I0831 22:06:42.271438  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 22:06:42.271566  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3755819124 /var/lib/minikube/certs/proxy-client.key
	I0831 22:06:42.279744  125982 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0831 22:06:42.279769  125982 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:42.279803  125982 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:42.289697  125982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-115525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:06:42.289924  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1205923735 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:42.299525  125982 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:06:42.299697  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3647743317 /var/lib/minikube/kubeconfig
	I0831 22:06:42.309046  125982 exec_runner.go:51] Run: openssl version
	I0831 22:06:42.311980  125982 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:06:42.321640  125982 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:42.323147  125982 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Aug 31 22:06 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:42.323202  125982 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:42.326173  125982 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:06:42.335286  125982 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:06:42.336537  125982 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:06:42.336579  125982 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:42.336688  125982 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 22:06:42.353173  125982 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:06:42.361861  125982 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:06:42.370119  125982 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0831 22:06:42.391445  125982 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:06:42.400998  125982 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:06:42.401038  125982 kubeadm.go:157] found existing configuration files:
	
	I0831 22:06:42.401095  125982 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:06:42.410014  125982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:06:42.410073  125982 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:06:42.418294  125982 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:06:42.427409  125982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:06:42.427474  125982 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:06:42.435691  125982 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:06:42.444539  125982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:06:42.444609  125982 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:06:42.452590  125982 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:06:42.460802  125982 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:06:42.460867  125982 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:06:42.468733  125982 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0831 22:06:42.502860  125982 kubeadm.go:310] W0831 22:06:42.502731  126882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:06:42.503330  125982 kubeadm.go:310] W0831 22:06:42.503291  126882 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:06:42.505030  125982 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:06:42.505051  125982 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:06:42.605047  125982 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:06:42.605169  125982 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:06:42.605182  125982 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:06:42.605188  125982 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:06:42.616728  125982 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:06:42.619534  125982 out.go:235]   - Generating certificates and keys ...
	I0831 22:06:42.619576  125982 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:06:42.619591  125982 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:06:42.699666  125982 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:06:42.883455  125982 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:06:42.951022  125982 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:06:43.361228  125982 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:06:43.674107  125982 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:06:43.674141  125982 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0831 22:06:43.850016  125982 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:06:43.850125  125982 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0831 22:06:44.186472  125982 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:06:44.275946  125982 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:06:44.473760  125982 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:06:44.474012  125982 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:06:44.518287  125982 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:06:44.566137  125982 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:06:44.785315  125982 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:06:45.032638  125982 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:06:45.258389  125982 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:06:45.258993  125982 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:06:45.261527  125982 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:06:45.263893  125982 out.go:235]   - Booting up control plane ...
	I0831 22:06:45.263930  125982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:06:45.263960  125982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:06:45.264394  125982 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:06:45.292672  125982 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:06:45.297486  125982 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:06:45.297518  125982 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:06:45.519295  125982 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:06:45.519326  125982 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:06:46.020956  125982 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.634074ms
	I0831 22:06:46.021073  125982 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:06:50.523184  125982 kubeadm.go:310] [api-check] The API server is healthy after 4.502220826s
	I0831 22:06:50.534934  125982 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:06:50.547412  125982 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:06:50.568129  125982 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:06:50.568158  125982 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:06:50.578388  125982 kubeadm.go:310] [bootstrap-token] Using token: 6j5waj.tt9kt7a6rudme3ho
	I0831 22:06:50.579727  125982 out.go:235]   - Configuring RBAC rules ...
	I0831 22:06:50.579762  125982 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:06:50.584872  125982 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:06:50.592307  125982 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:06:50.596158  125982 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:06:50.600964  125982 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:06:50.604200  125982 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:06:50.929612  125982 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:06:51.355077  125982 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:06:51.929594  125982 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:06:51.930481  125982 kubeadm.go:310] 
	I0831 22:06:51.930503  125982 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:06:51.930507  125982 kubeadm.go:310] 
	I0831 22:06:51.930511  125982 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:06:51.930514  125982 kubeadm.go:310] 
	I0831 22:06:51.930518  125982 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:06:51.930522  125982 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:06:51.930526  125982 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:06:51.930530  125982 kubeadm.go:310] 
	I0831 22:06:51.930544  125982 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:06:51.930547  125982 kubeadm.go:310] 
	I0831 22:06:51.930551  125982 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:06:51.930554  125982 kubeadm.go:310] 
	I0831 22:06:51.930558  125982 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:06:51.930562  125982 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:06:51.930566  125982 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:06:51.930570  125982 kubeadm.go:310] 
	I0831 22:06:51.930575  125982 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:06:51.930579  125982 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:06:51.930582  125982 kubeadm.go:310] 
	I0831 22:06:51.930587  125982 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6j5waj.tt9kt7a6rudme3ho \
	I0831 22:06:51.930600  125982 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:16af509ed073f35629fc8b51edcad7a6ed32f703cad436d54d6086f698552fcf \
	I0831 22:06:51.930604  125982 kubeadm.go:310] 	--control-plane 
	I0831 22:06:51.930607  125982 kubeadm.go:310] 
	I0831 22:06:51.930610  125982 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:06:51.930612  125982 kubeadm.go:310] 
	I0831 22:06:51.930615  125982 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6j5waj.tt9kt7a6rudme3ho \
	I0831 22:06:51.930617  125982 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:16af509ed073f35629fc8b51edcad7a6ed32f703cad436d54d6086f698552fcf 
	I0831 22:06:51.933625  125982 cni.go:84] Creating CNI manager for ""
	I0831 22:06:51.933661  125982 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:51.935457  125982 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 22:06:51.936759  125982 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0831 22:06:51.948212  125982 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0831 22:06:51.948389  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2452824089 /etc/cni/net.d/1-k8s.conflist
	I0831 22:06:51.960277  125982 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:06:51.960348  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:51.960385  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_08_31T22_06_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0831 22:06:51.970360  125982 ops.go:34] apiserver oom_adj: -16
	I0831 22:06:52.034988  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:52.535713  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:53.035191  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:53.534999  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:54.035222  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:54.535225  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:55.035958  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:55.535398  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:56.035289  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:56.536077  125982 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:56.614264  125982 kubeadm.go:1113] duration metric: took 4.65397359s to wait for elevateKubeSystemPrivileges
	I0831 22:06:56.614293  125982 kubeadm.go:394] duration metric: took 14.277719783s to StartCluster
	I0831 22:06:56.614320  125982 settings.go:142] acquiring lock: {Name:mk9185a3f10945ffa1adda88767165737b7fd0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:56.614398  125982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-115525/kubeconfig
	I0831 22:06:56.615265  125982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-115525/kubeconfig: {Name:mka57507382e00d3117dfcc5106e15254dc41102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:56.615527  125982 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:06:56.615616  125982 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:06:56.615718  125982 addons.go:69] Setting yakd=true in profile "minikube"
	I0831 22:06:56.615734  125982 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0831 22:06:56.615767  125982 addons.go:234] Setting addon yakd=true in "minikube"
	I0831 22:06:56.615779  125982 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0831 22:06:56.615780  125982 addons.go:69] Setting registry=true in profile "minikube"
	I0831 22:06:56.615805  125982 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0831 22:06:56.615814  125982 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0831 22:06:56.615730  125982 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0831 22:06:56.615818  125982 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:06:56.615827  125982 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0831 22:06:56.615828  125982 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0831 22:06:56.615840  125982 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0831 22:06:56.615842  125982 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0831 22:06:56.615843  125982 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0831 22:06:56.615853  125982 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0831 22:06:56.615855  125982 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0831 22:06:56.615862  125982 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0831 22:06:56.615865  125982 addons.go:69] Setting volcano=true in profile "minikube"
	I0831 22:06:56.615869  125982 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0831 22:06:56.615870  125982 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0831 22:06:56.615816  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.615882  125982 addons.go:234] Setting addon volcano=true in "minikube"
	I0831 22:06:56.615885  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.615888  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.615893  125982 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0831 22:06:56.615899  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.615930  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.615927  125982 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0831 22:06:56.615955  125982 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0831 22:06:56.615806  125982 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0831 22:06:56.615991  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.615843  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.616551  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.616576  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.616584  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.616602  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.616607  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.616616  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.616618  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.616626  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.616636  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.616638  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.616646  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.616645  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.616653  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.616654  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.616678  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.616682  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.616690  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.616799  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.616811  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.616837  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.615832  125982 mustload.go:65] Loading cluster: minikube
	I0831 22:06:56.615818  125982 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0831 22:06:56.617291  125982 out.go:177] * Configuring local host environment ...
	I0831 22:06:56.617296  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.617311  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.617325  125982 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:06:56.617338  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.617530  125982 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0831 22:06:56.617564  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.615857  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.617798  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.617814  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.617845  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.618232  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.618246  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.618274  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.618408  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.618423  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.618454  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.615873  125982 host.go:66] Checking if "minikube" exists ...
	W0831 22:06:56.620558  125982 out.go:270] * 
	W0831 22:06:56.620583  125982 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0831 22:06:56.620591  125982 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0831 22:06:56.620599  125982 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0831 22:06:56.620606  125982 out.go:270] * 
	W0831 22:06:56.620650  125982 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0831 22:06:56.620662  125982 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0831 22:06:56.620669  125982 out.go:270] * 
	W0831 22:06:56.620716  125982 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0831 22:06:56.620728  125982 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0831 22:06:56.620734  125982 out.go:270] * 
	W0831 22:06:56.620750  125982 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0831 22:06:56.620784  125982 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 22:06:56.620980  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.621004  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.621038  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.616646  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.615809  125982 addons.go:234] Setting addon registry=true in "minikube"
	I0831 22:06:56.621483  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.622171  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.622231  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.622281  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.623752  125982 out.go:177] * Verifying Kubernetes components...
	I0831 22:06:56.617798  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.624096  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.624142  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.625802  125982 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0831 22:06:56.644868  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.647371  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.648120  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.649662  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.652551  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.654293  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.655756  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.660128  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.660204  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.660423  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.660700  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.661167  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.664025  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.664169  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.671433  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.671515  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.672392  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.672455  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.674929  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.675964  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.675992  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.678472  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.678528  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.679348  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.679406  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.682241  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.683601  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.683767  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.684109  125982 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:06:56.685336  125982 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:06:56.686264  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.686489  125982 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:06:56.686521  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:06:56.686675  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1284213563 /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:06:56.693502  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.693539  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.693879  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.693899  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.694100  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.694150  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.706324  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.707983  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.709693  125982 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:06:56.710430  125982 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0831 22:06:56.711050  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.711155  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.711794  125982 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:06:56.711830  125982 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:06:56.711990  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4215561619 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:06:56.712140  125982 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0831 22:06:56.712168  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0831 22:06:56.712298  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3688509488 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0831 22:06:56.712823  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.712880  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.721996  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.722035  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.722610  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.722649  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.722683  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.722727  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.722910  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.722964  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.724042  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.724073  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.725284  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.725346  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.727667  125982 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:06:56.727698  125982 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:06:56.727826  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2863602340 /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:06:56.728551  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.728630  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.729385  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.731374  125982 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:06:56.732839  125982 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:06:56.732880  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:06:56.733046  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4116534751 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:06:56.733910  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.735291  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.736204  125982 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:06:56.737150  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:06:56.737652  125982 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0831 22:06:56.737689  125982 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0831 22:06:56.737902  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3902644660 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0831 22:06:56.739680  125982 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:06:56.739706  125982 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0831 22:06:56.739714  125982 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:06:56.739757  125982 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:06:56.740325  125982 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:06:56.740357  125982 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:06:56.740494  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1704796564 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:06:56.744436  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.744472  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.744712  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.744737  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.747075  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.747106  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.748851  125982 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:06:56.748917  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:06:56.749239  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1521196597 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:06:56.749745  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.749796  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.751773  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.751911  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.752312  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.752837  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.752870  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.753235  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.753361  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:06:56.753749  125982 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:06:56.754492  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.754543  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.754711  125982 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0831 22:06:56.756325  125982 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:06:56.756360  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:06:56.756532  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube259000785 /etc/kubernetes/addons/deployment.yaml
	I0831 22:06:56.758444  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:06:56.758746  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2993207975 /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:06:56.759664  125982 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0831 22:06:56.759906  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.760614  125982 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0831 22:06:56.760653  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.760275  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.761056  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.761145  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.761187  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.763539  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.763567  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.766928  125982 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:06:56.766961  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:06:56.767929  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1625493780 /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:06:56.768144  125982 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:06:56.768167  125982 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:06:56.768298  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3822822224 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:06:56.768859  125982 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:06:56.770459  125982 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0831 22:06:56.770520  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.771489  125982 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0831 22:06:56.771521  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:06:56.772152  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:06:56.772163  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:06:56.772197  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:06:56.772404  125982 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:06:56.772421  125982 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0831 22:06:56.772520  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2149342983 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:06:56.774025  125982 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:06:56.774071  125982 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:06:56.774210  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3151462879 /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:06:56.775664  125982 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:06:56.775702  125982 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:06:56.775708  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0831 22:06:56.775731  125982 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:06:56.775887  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1643681951 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:06:56.776327  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1753736334 /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:06:56.776857  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.776882  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.784373  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.785677  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.785709  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.786672  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:06:56.788568  125982 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:06:56.788605  125982 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:06:56.788776  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3979802768 /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:06:56.793226  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.793451  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.795010  125982 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:06:56.795157  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:06:56.796409  125982 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:06:56.796659  125982 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:06:56.796937  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3491667397 /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:06:56.797410  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0831 22:06:56.800083  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 22:06:56.802993  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:06:56.805147  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:06:56.808231  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:06:56.808794  125982 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:06:56.808828  125982 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:06:56.809056  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube661630696 /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:06:56.810482  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:06:56.811076  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:06:56.812553  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:06:56.812643  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:06:56.813816  125982 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:06:56.814331  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:06:56.814636  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:06:56.814828  125982 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:06:56.814866  125982 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:06:56.815003  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube104654747 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:06:56.822517  125982 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:06:56.822564  125982 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:06:56.822757  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3279965559 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:06:56.826064  125982 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:06:56.826105  125982 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:06:56.826248  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3110840138 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:06:56.826693  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.826763  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.830321  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:06:56.830572  125982 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:06:56.830605  125982 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:06:56.830731  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1690044062 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:06:56.836388  125982 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:06:56.836420  125982 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:06:56.836527  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2979962703 /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:06:56.837540  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:06:56.837600  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:06:56.847172  125982 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:06:56.856349  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.856390  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.857507  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:06:56.857534  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:06:56.859508  125982 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:06:56.859547  125982 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:06:56.859731  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3678705221 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:06:56.871888  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.871990  125982 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:06:56.872009  125982 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0831 22:06:56.872018  125982 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0831 22:06:56.872068  125982 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:06:56.874039  125982 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:06:56.874079  125982 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:06:56.875023  125982 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:06:56.875057  125982 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:06:56.875201  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube472655379 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:06:56.876708  125982 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:06:56.876735  125982 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:06:56.876861  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3130254162 /etc/kubernetes/addons/ig-role.yaml
	I0831 22:06:56.877360  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:06:56.878638  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1786468580 /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:06:56.885091  125982 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:06:56.890418  125982 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:06:56.890795  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1067971303 /etc/kubernetes/addons/storageclass.yaml
	I0831 22:06:56.919079  125982 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:06:56.919511  125982 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:06:56.919545  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:06:56.920834  125982 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:06:56.920870  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:06:56.920937  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube101868642 /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:06:56.921040  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3762110268 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:06:56.921140  125982 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:06:56.921161  125982 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:06:56.921271  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4204117560 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:06:56.921464  125982 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:06:56.921488  125982 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:06:56.921626  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3147090480 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:06:56.936610  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:06:56.950252  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:06:56.952067  125982 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:06:56.952104  125982 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:06:56.952242  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube102415881 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:06:56.959377  125982 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:06:56.959419  125982 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:06:56.959581  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2911915334 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:06:56.980232  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:06:57.001830  125982 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:06:57.001883  125982 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:06:57.002051  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1296201403 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:06:57.014869  125982 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:06:57.014914  125982 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:06:57.015073  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1922900052 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:06:57.026717  125982 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:06:57.026762  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:06:57.026937  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube981797001 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:06:57.055529  125982 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:06:57.055576  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:06:57.055747  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3910522344 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:06:57.081025  125982 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0831 22:06:57.100531  125982 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:06:57.100572  125982 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:06:57.100665  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:06:57.100708  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2611669590 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:06:57.135019  125982 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
	I0831 22:06:57.138874  125982 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:06:57.138914  125982 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:06:57.139043  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2042970205 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:06:57.139895  125982 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
	I0831 22:06:57.139921  125982 node_ready.go:38] duration metric: took 4.866672ms for node "ubuntu-20-agent-9" to be "Ready" ...
	I0831 22:06:57.139930  125982 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:06:57.152487  125982 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fplnn" in "kube-system" namespace to be "Ready" ...
	I0831 22:06:57.162292  125982 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:06:57.162331  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:06:57.162492  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3895875366 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:06:57.197943  125982 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:06:57.197997  125982 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:06:57.198319  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2193953292 /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:06:57.221441  125982 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:06:57.221558  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:06:57.221808  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2026094415 /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:06:57.245500  125982 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:06:57.245542  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:06:57.245705  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2893096631 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:06:57.352394  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:06:57.404867  125982 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:06:57.404923  125982 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:06:57.405064  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2959171064 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:06:57.527646  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:06:57.539962  125982 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0831 22:06:57.845481  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.031077784s)
	I0831 22:06:57.845521  125982 addons.go:475] Verifying addon registry=true in "minikube"
	I0831 22:06:57.851515  125982 out.go:177] * Verifying registry addon...
	I0831 22:06:57.854029  125982 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:06:57.862404  125982 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:06:57.862425  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:06:58.054936  125982 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0831 22:06:58.187668  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.207324272s)
	I0831 22:06:58.267580  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.437203344s)
	I0831 22:06:58.267620  125982 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0831 22:06:58.341271  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.390956906s)
	I0831 22:06:58.343790  125982 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0831 22:06:58.354893  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.540212708s)
	I0831 22:06:58.360924  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:06:58.666826  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.314361114s)
	I0831 22:06:58.865291  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:06:58.969885  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.869166565s)
	W0831 22:06:58.969928  125982 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:06:58.969957  125982 retry.go:31] will retry after 158.510875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:06:59.128766  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:06:59.166703  125982 pod_ready.go:103] pod "coredns-6f6b679f8f-fplnn" in "kube-system" namespace has status "Ready":"False"
	I0831 22:06:59.361057  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:06:59.867850  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:00.191658  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.379059052s)
	I0831 22:07:00.368525  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:00.411278  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.883561707s)
	I0831 22:07:00.411321  125982 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0831 22:07:00.413450  125982 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:07:00.416074  125982 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:07:00.435497  125982 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:07:00.435520  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:00.858677  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:00.922414  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:01.358501  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:01.460765  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:01.659224  125982 pod_ready.go:103] pod "coredns-6f6b679f8f-fplnn" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:01.857966  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:01.920367  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:02.158482  125982 pod_ready.go:93] pod "coredns-6f6b679f8f-fplnn" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:02.158507  125982 pod_ready.go:82] duration metric: took 5.005989735s for pod "coredns-6f6b679f8f-fplnn" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:02.158517  125982 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xm7sx" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:02.192003  125982 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.063172708s)
	I0831 22:07:02.357864  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:02.421029  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:02.858517  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:02.921911  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:03.358523  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:03.422458  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:03.779658  125982 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:07:03.779824  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube576223919 /var/lib/minikube/google_application_credentials.json
	I0831 22:07:03.794514  125982 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:07:03.794659  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1256385997 /var/lib/minikube/google_cloud_project
	I0831 22:07:03.812304  125982 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0831 22:07:03.812376  125982 host.go:66] Checking if "minikube" exists ...
	I0831 22:07:03.813101  125982 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0831 22:07:03.813133  125982 api_server.go:166] Checking apiserver status ...
	I0831 22:07:03.813182  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:07:03.837760  125982 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127300/cgroup
	I0831 22:07:03.855026  125982 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6"
	I0831 22:07:03.855109  125982 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/f7417fb0a27d58756d0dda71abe9bf12c8c962bd4341b57b39f20a154c1b8bb6/freezer.state
	I0831 22:07:03.858450  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:03.875032  125982 api_server.go:204] freezer state: "THAWED"
	I0831 22:07:03.875068  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:07:03.880293  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:07:03.880355  125982 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:07:03.884051  125982 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:07:03.885704  125982 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:03.887098  125982 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:07:03.887146  125982 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:07:03.887276  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1665100502 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:07:03.898506  125982 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:07:03.898552  125982 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:07:03.898698  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3639695894 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:07:03.915142  125982 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:03.915188  125982 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:07:03.915354  125982 exec_runner.go:51] Run: sudo cp -a /tmp/minikube905488788 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:03.921367  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:03.927444  125982 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:04.165477  125982 pod_ready.go:98] pod "coredns-6f6b679f8f-xm7sx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-31 22:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-31 22:06:57 +0000 UTC,FinishedAt:2024-08-31 22:07:03 +0000 UTC,ContainerID:docker://8a708e749fe617356efbf0fd1e796ea80e967fb2811a81c1162508a6aacedbd1,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://8a708e749fe617356efbf0fd1e796ea80e967fb2811a81c1162508a6aacedbd1 Started:0xc0006476a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00098a730} {Name:kube-api-access-79bv9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00098a740}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0831 22:07:04.165510  125982 pod_ready.go:82] duration metric: took 2.006985622s for pod "coredns-6f6b679f8f-xm7sx" in "kube-system" namespace to be "Ready" ...
	E0831 22:07:04.165526  125982 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-xm7sx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:07:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-31 22:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-31 22:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-31 22:06:57 +0000 UTC,FinishedAt:2024-08-31 22:07:03 +0000 UTC,ContainerID:docker://8a708e749fe617356efbf0fd1e796ea80e967fb2811a81c1162508a6aacedbd1,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://8a708e749fe617356efbf0fd1e796ea80e967fb2811a81c1162508a6aacedbd1 Started:0xc0006476a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00098a730} {Name:kube-api-access-79bv9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00098a740}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0831 22:07:04.165536  125982 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.171005  125982 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:04.171034  125982 pod_ready.go:82] duration metric: took 5.488923ms for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.171048  125982 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.359705  125982 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0831 22:07:04.360542  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:04.361664  125982 out.go:177] * Verifying gcp-auth addon...
	I0831 22:07:04.364444  125982 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:07:04.367375  125982 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:07:04.459365  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:04.677020  125982 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:04.677048  125982 pod_ready.go:82] duration metric: took 505.99075ms for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.677063  125982 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.682125  125982 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:04.682160  125982 pod_ready.go:82] duration metric: took 5.086015ms for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.682176  125982 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9xl8p" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.687427  125982 pod_ready.go:93] pod "kube-proxy-9xl8p" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:04.687453  125982 pod_ready.go:82] duration metric: took 5.269557ms for pod "kube-proxy-9xl8p" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.687466  125982 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.859055  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:04.922035  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:04.963084  125982 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:04.963113  125982 pod_ready.go:82] duration metric: took 275.638092ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:04.963126  125982 pod_ready.go:39] duration metric: took 7.823184659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:07:04.963151  125982 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:07:04.963222  125982 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:07:04.983341  125982 api_server.go:72] duration metric: took 8.362488227s to wait for apiserver process to appear ...
	I0831 22:07:04.983375  125982 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:07:04.983400  125982 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0831 22:07:04.988905  125982 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0831 22:07:04.989993  125982 api_server.go:141] control plane version: v1.31.0
	I0831 22:07:04.990111  125982 api_server.go:131] duration metric: took 6.720001ms to wait for apiserver health ...
	I0831 22:07:04.990128  125982 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:07:05.169397  125982 system_pods.go:59] 17 kube-system pods found
	I0831 22:07:05.169431  125982 system_pods.go:61] "coredns-6f6b679f8f-fplnn" [d3596cc7-6ed5-4b11-a00d-f3109adb41c1] Running
	I0831 22:07:05.169440  125982 system_pods.go:61] "csi-hostpath-attacher-0" [0f631f15-f3da-4072-80a9-5e64fe798418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:07:05.169446  125982 system_pods.go:61] "csi-hostpath-resizer-0" [c1ed4178-cd45-4471-8a58-a8fe6a957063] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 22:07:05.169458  125982 system_pods.go:61] "csi-hostpathplugin-rznqc" [e106032c-71bc-4cf3-9fcf-1268762838e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:07:05.169463  125982 system_pods.go:61] "etcd-ubuntu-20-agent-9" [ce24bd55-8479-4b8e-852e-9bf4c9c0977c] Running
	I0831 22:07:05.169467  125982 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [c890e209-1e3d-42ad-896e-0253b7e74290] Running
	I0831 22:07:05.169471  125982 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [ff334347-82f7-4b93-af86-0d3dc0135f66] Running
	I0831 22:07:05.169474  125982 system_pods.go:61] "kube-proxy-9xl8p" [a41fcda4-4d26-4677-8984-1b664b3be973] Running
	I0831 22:07:05.169477  125982 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [2fcfdd22-4a91-4333-aff9-3c0e3c6b3ee0] Running
	I0831 22:07:05.169506  125982 system_pods.go:61] "metrics-server-84c5f94fbc-q2plw" [b4475d05-68d0-4661-ad2a-9dfc22b3badf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0831 22:07:05.169523  125982 system_pods.go:61] "nvidia-device-plugin-daemonset-t2ghz" [7baf585d-317d-48b6-b82a-cb92cb62c9a2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0831 22:07:05.169530  125982 system_pods.go:61] "registry-6fb4cdfc84-kvbfn" [d885b228-b6fd-46cb-8255-e4f053cab565] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 22:07:05.169535  125982 system_pods.go:61] "registry-proxy-zvqvj" [60f8afd4-c385-4f32-966b-848604e750b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 22:07:05.169541  125982 system_pods.go:61] "snapshot-controller-56fcc65765-knw7h" [53fb4f17-9dce-4f50-bf26-34988f290c5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:05.169546  125982 system_pods.go:61] "snapshot-controller-56fcc65765-s2889" [87c33f70-7d29-4964-848f-34f917191f07] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:05.169549  125982 system_pods.go:61] "storage-provisioner" [0fee6f36-080d-497e-889f-ee3a78785da3] Running
	I0831 22:07:05.169555  125982 system_pods.go:61] "tiller-deploy-b48cc5f79-tb4n6" [8f4f84ee-4dff-427a-b8a8-718a8e75f67c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0831 22:07:05.169563  125982 system_pods.go:74] duration metric: took 179.425165ms to wait for pod list to return data ...
	I0831 22:07:05.169574  125982 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:07:05.358514  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:05.361288  125982 default_sa.go:45] found service account: "default"
	I0831 22:07:05.361320  125982 default_sa.go:55] duration metric: took 191.737759ms for default service account to be created ...
	I0831 22:07:05.361335  125982 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:07:05.460177  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:05.570575  125982 system_pods.go:86] 17 kube-system pods found
	I0831 22:07:05.570610  125982 system_pods.go:89] "coredns-6f6b679f8f-fplnn" [d3596cc7-6ed5-4b11-a00d-f3109adb41c1] Running
	I0831 22:07:05.570621  125982 system_pods.go:89] "csi-hostpath-attacher-0" [0f631f15-f3da-4072-80a9-5e64fe798418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:07:05.570628  125982 system_pods.go:89] "csi-hostpath-resizer-0" [c1ed4178-cd45-4471-8a58-a8fe6a957063] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 22:07:05.570637  125982 system_pods.go:89] "csi-hostpathplugin-rznqc" [e106032c-71bc-4cf3-9fcf-1268762838e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:07:05.570642  125982 system_pods.go:89] "etcd-ubuntu-20-agent-9" [ce24bd55-8479-4b8e-852e-9bf4c9c0977c] Running
	I0831 22:07:05.570647  125982 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [c890e209-1e3d-42ad-896e-0253b7e74290] Running
	I0831 22:07:05.570654  125982 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [ff334347-82f7-4b93-af86-0d3dc0135f66] Running
	I0831 22:07:05.570660  125982 system_pods.go:89] "kube-proxy-9xl8p" [a41fcda4-4d26-4677-8984-1b664b3be973] Running
	I0831 22:07:05.570665  125982 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [2fcfdd22-4a91-4333-aff9-3c0e3c6b3ee0] Running
	I0831 22:07:05.570674  125982 system_pods.go:89] "metrics-server-84c5f94fbc-q2plw" [b4475d05-68d0-4661-ad2a-9dfc22b3badf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0831 22:07:05.570687  125982 system_pods.go:89] "nvidia-device-plugin-daemonset-t2ghz" [7baf585d-317d-48b6-b82a-cb92cb62c9a2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0831 22:07:05.570699  125982 system_pods.go:89] "registry-6fb4cdfc84-kvbfn" [d885b228-b6fd-46cb-8255-e4f053cab565] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0831 22:07:05.570711  125982 system_pods.go:89] "registry-proxy-zvqvj" [60f8afd4-c385-4f32-966b-848604e750b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0831 22:07:05.570722  125982 system_pods.go:89] "snapshot-controller-56fcc65765-knw7h" [53fb4f17-9dce-4f50-bf26-34988f290c5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:05.570737  125982 system_pods.go:89] "snapshot-controller-56fcc65765-s2889" [87c33f70-7d29-4964-848f-34f917191f07] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:05.570743  125982 system_pods.go:89] "storage-provisioner" [0fee6f36-080d-497e-889f-ee3a78785da3] Running
	I0831 22:07:05.570752  125982 system_pods.go:89] "tiller-deploy-b48cc5f79-tb4n6" [8f4f84ee-4dff-427a-b8a8-718a8e75f67c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0831 22:07:05.570769  125982 system_pods.go:126] duration metric: took 209.426546ms to wait for k8s-apps to be running ...
	I0831 22:07:05.570778  125982 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:07:05.570841  125982 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:07:05.584794  125982 system_svc.go:56] duration metric: took 14.0017ms WaitForService to wait for kubelet
	I0831 22:07:05.584830  125982 kubeadm.go:582] duration metric: took 8.963984269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:07:05.584875  125982 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:07:05.763043  125982 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0831 22:07:05.763079  125982 node_conditions.go:123] node cpu capacity is 8
	I0831 22:07:05.763094  125982 node_conditions.go:105] duration metric: took 178.213426ms to run NodePressure ...
	I0831 22:07:05.763108  125982 start.go:241] waiting for startup goroutines ...
	I0831 22:07:05.858844  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:05.921078  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:06.370061  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:06.420320  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:06.858928  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:06.922370  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:07.358126  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:07.460076  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:07.859058  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:07.960867  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:08.358115  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:08.459184  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:08.857968  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:08.920960  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:09.358253  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:09.420625  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:09.858680  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:09.920683  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:10.358901  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:10.421670  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:10.858937  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:10.921190  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:11.358124  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:11.469941  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:11.858951  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:11.921423  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:12.360202  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:12.421909  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:12.858260  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:12.921247  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:13.358653  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:13.460560  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:13.858577  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:13.921397  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:14.358543  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:14.453092  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:14.858283  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:14.920998  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:15.358519  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:15.420675  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:15.858240  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:15.920410  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:16.359218  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:16.422179  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:16.858771  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:16.921462  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:17.358040  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:17.421970  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:17.858698  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:17.921618  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:18.358118  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:18.421003  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:18.858064  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:18.920531  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:19.358181  125982 kapi.go:107] duration metric: took 21.504150749s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:07:19.460060  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:19.920744  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:20.421313  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:20.919957  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:21.421435  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:21.971400  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:22.421304  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:22.920713  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:23.421275  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:23.921676  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:24.421008  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:24.921299  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:25.421596  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:25.922199  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:26.420954  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:26.921614  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:27.420693  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:27.970028  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:28.420273  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:28.921361  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:29.421003  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:29.971204  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:30.474472  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:30.921689  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:31.421326  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:31.921406  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:32.420676  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:32.920894  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:33.421015  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:33.922350  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.471611  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.921474  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.470462  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.921158  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.471255  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.921467  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.470209  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.920420  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.421546  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.920986  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.421303  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.921433  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.470763  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.970857  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.420832  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.971091  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.421344  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.927466  125982 kapi.go:107] duration metric: took 42.511389433s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:07:45.869050  125982 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:07:45.869080  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.368442  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:46.867561  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.368045  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:47.868750  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.368453  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:48.868470  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.368174  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:49.868330  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.367777  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:50.868462  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.367656  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:51.867876  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.368121  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:52.868631  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.368389  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:53.867829  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.368616  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:54.867712  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.367988  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:55.868279  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.369033  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:56.868465  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.367483  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:57.868246  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.368162  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:58.869039  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.368107  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:07:59.868711  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.368264  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:00.868874  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.368021  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:01.868396  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.367809  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:02.868192  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.368166  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:03.868437  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.367938  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:04.868225  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.368334  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:05.868027  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.368635  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:06.868125  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.368126  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:07.869036  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.368824  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:08.868615  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.368494  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:09.868362  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.368020  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:10.868668  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.367908  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:11.868241  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.369150  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:12.867870  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.368377  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:13.867638  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.368201  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:14.868479  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.367665  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:15.868246  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.368740  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:16.868220  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.368353  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:17.868253  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.368112  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:18.868430  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.368153  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:19.869113  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.368949  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:20.869079  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.368617  125982 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:21.868241  125982 kapi.go:107] duration metric: took 1m17.5037986s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:08:21.869976  125982 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0831 22:08:21.871438  125982 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:08:21.873115  125982 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:08:21.874601  125982 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, helm-tiller, metrics-server, storage-provisioner-rancher, yakd, storage-provisioner, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0831 22:08:21.876171  125982 addons.go:510] duration metric: took 1m25.260556763s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass helm-tiller metrics-server storage-provisioner-rancher yakd storage-provisioner inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0831 22:08:21.876221  125982 start.go:246] waiting for cluster config update ...
	I0831 22:08:21.876247  125982 start.go:255] writing updated cluster config ...
	I0831 22:08:21.876531  125982 exec_runner.go:51] Run: rm -f paused
	I0831 22:08:21.927550  125982 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:08:21.929832  125982 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Sun 2024-08-25 03:11:38 UTC, end at Sat 2024-08-31 22:18:14 UTC. --
	Aug 31 22:09:41 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:09:41.532558393Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:09:41 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:09:41.534887611Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:10:32 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:10:32.535624329Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:10:32 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:10:32.538266550Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:10:40 ubuntu-20-agent-9 cri-dockerd[126545]: time="2024-08-31T22:10:40Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc"
	Aug 31 22:10:41 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:10:41.874078096Z" level=info msg="ignoring event" container=3fb8dddd398785a8f0a55b168e0d0b3dec7fa52ad6042eeee8488f4c33b25bbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:12:05 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:12:05.551950827Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:12:05 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:12:05.557213227Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:13:25 ubuntu-20-agent-9 cri-dockerd[126545]: time="2024-08-31T22:13:25Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc"
	Aug 31 22:13:26 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:13:26.869711293Z" level=info msg="ignoring event" container=a732ef3abe0661aac6d815d616e5aaf71bd67ba96e5c183e33785376004a719e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:14:56 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:14:56.530542912Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:14:56 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:14:56.532599729Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 31 22:17:14 ubuntu-20-agent-9 cri-dockerd[126545]: time="2024-08-31T22:17:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5d68849fdb005eced46b2027df693d727391da16af5eab0f2656eb01ae489e4c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Aug 31 22:17:14 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:17:14.302237842Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:17:14 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:17:14.304344666Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:17:28 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:17:28.531797505Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:17:28 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:17:28.534092412Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:17:51 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:17:51.537101645Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:17:51 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:17:51.539280424Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 31 22:18:13 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:18:13.775600005Z" level=info msg="ignoring event" container=5d68849fdb005eced46b2027df693d727391da16af5eab0f2656eb01ae489e4c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:18:14 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:18:14.046455467Z" level=info msg="ignoring event" container=28f3ee6ad526cee7ce4f154e02b9de8bcdee70c026b084aa6c2a7415ea379013 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:18:14 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:18:14.096755349Z" level=info msg="ignoring event" container=7e7a32d652c7f85a27394a820edbd7ec82ae2acbcb0ef6e56da9ce8407e2db4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:18:14 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:18:14.184275415Z" level=info msg="ignoring event" container=f79f9fc8f6b2ccd5d672354909bc07d216494598dfadace543a517b5345c2ecf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:18:14 ubuntu-20-agent-9 cri-dockerd[126545]: time="2024-08-31T22:18:14Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-zvqvj_kube-system\": unexpected command output nsenter: cannot open /proc/128854/ns/net: No such file or directory\n with error: exit status 1"
	Aug 31 22:18:14 ubuntu-20-agent-9 dockerd[126216]: time="2024-08-31T22:18:14.281150256Z" level=info msg="ignoring event" container=9dc7f1cbf411b940e8f8c7de7f97fdd305432c860c067e8169aec8bef9c1bab9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a732ef3abe066       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc                            4 minutes ago       Exited              gadget                                   6                   ec496bdfa03ba       gadget-wxf5m
	833799de21a92       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   c8e2d2224052c       gcp-auth-89d5ffd79-kcr8p
	6936daaac9419       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   480615e74cb4c       csi-hostpathplugin-rznqc
	241831cc688c2       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   480615e74cb4c       csi-hostpathplugin-rznqc
	cb1c213ba845e       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   480615e74cb4c       csi-hostpathplugin-rznqc
	e92204b375cb0       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   480615e74cb4c       csi-hostpathplugin-rznqc
	dc52a99a25dc5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   480615e74cb4c       csi-hostpathplugin-rznqc
	837c90c3ea536       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   5396ed15b14e7       csi-hostpath-resizer-0
	9d776625d828b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   480615e74cb4c       csi-hostpathplugin-rznqc
	4ca16ebf3d903       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   9cabf99a0fb25       csi-hostpath-attacher-0
	161cf8c55fdcf       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   0a5c46ae23587       snapshot-controller-56fcc65765-knw7h
	8a455e3ca97ab       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   717b18ede8eee       snapshot-controller-56fcc65765-s2889
	f3018451b518f       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   f936a859035ac       yakd-dashboard-67d98fc6b-9ppm2
	f6fb49ceb95e4       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   464b7f7d5957e       local-path-provisioner-86d989889c-zcmgm
	7e7a32d652c7f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              10 minutes ago      Exited              registry-proxy                           0                   9dc7f1cbf411b       registry-proxy-zvqvj
	abf2be77096ff       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   a96b5759a99e9       metrics-server-84c5f94fbc-q2plw
	e382ce949a6e2       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   1ad3d0c16b4d4       cloud-spanner-emulator-769b77f747-h7b8f
	b4865b2c6f002       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  11 minutes ago      Running             tiller                                   0                   2ef189aeda24c       tiller-deploy-b48cc5f79-tb4n6
	09413e974739b       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   ba4d153d90ecb       nvidia-device-plugin-daemonset-t2ghz
	ac698f11498f3       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   7dac2ad28023f       storage-provisioner
	1ca6864a04c89       cbb01a7bd410d                                                                                                                                11 minutes ago      Running             coredns                                  0                   7afb6da7212b1       coredns-6f6b679f8f-fplnn
	4ba9004253e19       ad83b2ca7b09e                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   10699c6c76432       kube-proxy-9xl8p
	17ce343a0cd55       045733566833c                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   eb1f072d94532       kube-controller-manager-ubuntu-20-agent-9
	c93bec0e9f606       1766f54c897f0                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   252fbd64d82ac       kube-scheduler-ubuntu-20-agent-9
	aafbcddeaa3eb       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   73e59957ba0de       etcd-ubuntu-20-agent-9
	f7417fb0a27d5       604f5db92eaa8                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   647d7512bf334       kube-apiserver-ubuntu-20-agent-9
	
	
	==> coredns [1ca6864a04c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33184 - 30649 "HINFO IN 8621122144610639524.6662797623426096613. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008193494s
	[INFO] 10.244.0.24:46183 - 45567 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000339889s
	[INFO] 10.244.0.24:58868 - 64354 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143995s
	[INFO] 10.244.0.24:35001 - 9036 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136923s
	[INFO] 10.244.0.24:56222 - 243 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000216968s
	[INFO] 10.244.0.24:35148 - 32664 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108026s
	[INFO] 10.244.0.24:40799 - 65247 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116958s
	[INFO] 10.244.0.24:46763 - 28783 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004034888s
	[INFO] 10.244.0.24:45076 - 61076 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004498019s
	[INFO] 10.244.0.24:54376 - 41072 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003609506s
	[INFO] 10.244.0.24:53655 - 20386 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003725827s
	[INFO] 10.244.0.24:60588 - 13908 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002450517s
	[INFO] 10.244.0.24:38838 - 27706 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002503317s
	[INFO] 10.244.0.24:37081 - 20203 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002369179s
	[INFO] 10.244.0.24:35537 - 63328 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002730594s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-9
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-9
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_06_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-9
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:06:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-9
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:18:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:14:01 +0000   Sat, 31 Aug 2024 22:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:14:01 +0000   Sat, 31 Aug 2024 22:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:14:01 +0000   Sat, 31 Aug 2024 22:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:14:01 +0000   Sat, 31 Aug 2024 22:06:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.154.0.4
	  Hostname:    ubuntu-20-agent-9
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                4894487b-7b30-e033-3a9d-c6f45b6c4cf8
	  Boot ID:                    8c90c110-8fe6-4c50-bf4b-8a85308bbb22
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-h7b8f      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-wxf5m                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-kcr8p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-fplnn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-rznqc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-9                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-9             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-9    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9xl8p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-9             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-q2plw              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-t2ghz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-knw7h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-s2889         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-tb4n6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-zcmgm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-9ppm2               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 23 3c 95 c3 39 08 06
	[  +7.016261] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae d8 a9 80 15 13 08 06
	[  +0.040209] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e f3 6c cd 54 ad 08 06
	[  +2.789620] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a e2 61 15 01 ae 08 06
	[  +1.816364] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be ac fe c1 e1 a2 08 06
	[  +2.147418] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 b5 9e 9b c0 f4 08 06
	[  +5.928777] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff de 3b 25 06 10 c1 08 06
	[  +0.117628] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a e4 1f 18 1f 20 08 06
	[  +0.111649] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 31 f6 fa 7e e6 08 06
	[Aug31 22:08] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 4f 7c f9 be ec 08 06
	[  +0.031920] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe 17 a9 df 5c 23 08 06
	[ +12.107130] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff f2 7d bd cd ee 27 08 06
	[  +0.000515] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 a7 3b 3a 03 9c 08 06
	
	
	==> etcd [aafbcddeaa3e] <==
	{"level":"info","ts":"2024-08-31T22:06:47.243774Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"82d4d36e40f9b4a","initial-advertise-peer-urls":["https://10.154.0.4:2380"],"listen-peer-urls":["https://10.154.0.4:2380"],"advertise-client-urls":["https://10.154.0.4:2379"],"listen-client-urls":["https://10.154.0.4:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-31T22:06:47.243805Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-31T22:06:47.531160Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:47.531205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:47.531239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgPreVoteResp from 82d4d36e40f9b4a at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:47.531254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:47.531260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:47.531269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:47.531276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:47.532264Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:47.532966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:06:47.532963Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:06:47.532996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:06:47.533229Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:06:47.533405Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T22:06:47.533333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:47.533546Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:47.533579Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:47.534165Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:06:47.534328Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:06:47.535072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T22:06:47.535596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
	{"level":"info","ts":"2024-08-31T22:16:47.726659Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1762}
	{"level":"info","ts":"2024-08-31T22:16:47.749720Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1762,"took":"22.513538ms","hash":3750733341,"current-db-size-bytes":8519680,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":4579328,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-08-31T22:16:47.749798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3750733341,"revision":1762,"compact-revision":-1}
	
	
	==> gcp-auth [833799de21a9] <==
	2024/08/31 22:08:21 GCP Auth Webhook started!
	2024/08/31 22:08:38 Ready to marshal response ...
	2024/08/31 22:08:38 Ready to write response ...
	2024/08/31 22:08:38 Ready to marshal response ...
	2024/08/31 22:08:38 Ready to write response ...
	2024/08/31 22:09:01 Ready to marshal response ...
	2024/08/31 22:09:01 Ready to write response ...
	2024/08/31 22:09:01 Ready to marshal response ...
	2024/08/31 22:09:01 Ready to write response ...
	2024/08/31 22:09:01 Ready to marshal response ...
	2024/08/31 22:09:01 Ready to write response ...
	2024/08/31 22:17:13 Ready to marshal response ...
	2024/08/31 22:17:13 Ready to write response ...
	
	
	==> kernel <==
	 22:18:14 up  2:00,  0 users,  load average: 0.40, 0.36, 0.35
	Linux ubuntu-20-agent-9 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [f7417fb0a27d] <==
	W0831 22:07:44.198731       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.117.197:443: connect: connection refused
	W0831 22:07:45.358069       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.127.235:443: connect: connection refused
	E0831 22:07:45.358113       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.127.235:443: connect: connection refused" logger="UnhandledError"
	W0831 22:08:07.380814       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.127.235:443: connect: connection refused
	E0831 22:08:07.380849       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.127.235:443: connect: connection refused" logger="UnhandledError"
	W0831 22:08:07.396567       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.127.235:443: connect: connection refused
	E0831 22:08:07.396609       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.127.235:443: connect: connection refused" logger="UnhandledError"
	I0831 22:08:38.187285       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0831 22:08:38.206768       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0831 22:08:51.657886       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0831 22:08:51.670326       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0831 22:08:51.775246       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0831 22:08:51.776402       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0831 22:08:51.800441       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0831 22:08:51.862804       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0831 22:08:51.974014       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0831 22:08:51.978412       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0831 22:08:52.059760       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0831 22:08:52.701628       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0831 22:08:52.863820       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0831 22:08:52.863838       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0831 22:08:52.929108       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0831 22:08:53.041134       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0831 22:08:53.060733       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0831 22:08:53.260366       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [17ce343a0cd5] <==
	W0831 22:16:56.797196       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:16:56.797243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:04.532919       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:04.532969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:08.321721       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:08.321802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:24.808804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:24.808855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:25.660404       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:25.660450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:29.503573       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:29.503618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:37.508482       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:37.508532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:41.735767       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:41.735817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:43.569474       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:43.569523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:17:55.005360       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:17:55.005412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:18:04.154097       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:18:04.154149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:18:09.272637       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:18:09.272679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:18:14.006691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="10.465µs"
	
	
	==> kube-proxy [4ba9004253e1] <==
	I0831 22:06:57.936258       1 server_linux.go:66] "Using iptables proxy"
	I0831 22:06:58.150263       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
	E0831 22:06:58.156100       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:06:58.329422       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 22:06:58.329507       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:06:58.337722       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:06:58.338201       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:06:58.338233       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:06:58.341192       1 config.go:197] "Starting service config controller"
	I0831 22:06:58.341210       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:06:58.341233       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:06:58.341238       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:06:58.341803       1 config.go:326] "Starting node config controller"
	I0831 22:06:58.341815       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:06:58.441329       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:06:58.441395       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:06:58.442401       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c93bec0e9f60] <==
	E0831 22:06:48.703327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0831 22:06:48.703284       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:06:48.703692       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:48.704169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:48.704356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0831 22:06:48.704394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:06:48.704401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0831 22:06:48.704696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.553930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:06:49.553976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.632156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:06:49.632206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.651761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:49.651817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.776978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:49.777029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.876825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:49.876870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.895515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:06:49.895562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.905402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 22:06:49.905473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:49.926972       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:06:49.927019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:06:50.300482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sun 2024-08-25 03:11:38 UTC, end at Sat 2024-08-31 22:18:15 UTC. --
	Aug 31 22:17:44 ubuntu-20-agent-9 kubelet[127458]: E0831 22:17:44.378116  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-wxf5m_gadget(cc825ff0-eaed-44bb-bea0-5a93e3561356)\"" pod="gadget/gadget-wxf5m" podUID="cc825ff0-eaed-44bb-bea0-5a93e3561356"
	Aug 31 22:17:45 ubuntu-20-agent-9 kubelet[127458]: E0831 22:17:45.380021  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="2c1b41bc-22e1-4a4c-9262-afb834fcb7fb"
	Aug 31 22:17:51 ubuntu-20-agent-9 kubelet[127458]: E0831 22:17:51.539906  127458 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Aug 31 22:17:51 ubuntu-20-agent-9 kubelet[127458]: E0831 22:17:51.540125  127458 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qx9rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(198651b6-d4ce-4a00-9d4b-c9f5587e4413): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Aug 31 22:17:51 ubuntu-20-agent-9 kubelet[127458]: E0831 22:17:51.541402  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="198651b6-d4ce-4a00-9d4b-c9f5587e4413"
	Aug 31 22:17:57 ubuntu-20-agent-9 kubelet[127458]: I0831 22:17:57.377887  127458 scope.go:117] "RemoveContainer" containerID="a732ef3abe0661aac6d815d616e5aaf71bd67ba96e5c183e33785376004a719e"
	Aug 31 22:17:57 ubuntu-20-agent-9 kubelet[127458]: E0831 22:17:57.378145  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-wxf5m_gadget(cc825ff0-eaed-44bb-bea0-5a93e3561356)\"" pod="gadget/gadget-wxf5m" podUID="cc825ff0-eaed-44bb-bea0-5a93e3561356"
	Aug 31 22:17:58 ubuntu-20-agent-9 kubelet[127458]: E0831 22:17:58.379086  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="2c1b41bc-22e1-4a4c-9262-afb834fcb7fb"
	Aug 31 22:18:06 ubuntu-20-agent-9 kubelet[127458]: E0831 22:18:06.379809  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="198651b6-d4ce-4a00-9d4b-c9f5587e4413"
	Aug 31 22:18:10 ubuntu-20-agent-9 kubelet[127458]: E0831 22:18:10.379983  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="2c1b41bc-22e1-4a4c-9262-afb834fcb7fb"
	Aug 31 22:18:11 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:11.378820  127458 scope.go:117] "RemoveContainer" containerID="a732ef3abe0661aac6d815d616e5aaf71bd67ba96e5c183e33785376004a719e"
	Aug 31 22:18:11 ubuntu-20-agent-9 kubelet[127458]: E0831 22:18:11.379070  127458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-wxf5m_gadget(cc825ff0-eaed-44bb-bea0-5a93e3561356)\"" pod="gadget/gadget-wxf5m" podUID="cc825ff0-eaed-44bb-bea0-5a93e3561356"
	Aug 31 22:18:13 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:13.921203  127458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx9rv\" (UniqueName: \"kubernetes.io/projected/198651b6-d4ce-4a00-9d4b-c9f5587e4413-kube-api-access-qx9rv\") pod \"198651b6-d4ce-4a00-9d4b-c9f5587e4413\" (UID: \"198651b6-d4ce-4a00-9d4b-c9f5587e4413\") "
	Aug 31 22:18:13 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:13.921279  127458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/198651b6-d4ce-4a00-9d4b-c9f5587e4413-gcp-creds\") pod \"198651b6-d4ce-4a00-9d4b-c9f5587e4413\" (UID: \"198651b6-d4ce-4a00-9d4b-c9f5587e4413\") "
	Aug 31 22:18:13 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:13.921373  127458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/198651b6-d4ce-4a00-9d4b-c9f5587e4413-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "198651b6-d4ce-4a00-9d4b-c9f5587e4413" (UID: "198651b6-d4ce-4a00-9d4b-c9f5587e4413"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 31 22:18:13 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:13.923734  127458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198651b6-d4ce-4a00-9d4b-c9f5587e4413-kube-api-access-qx9rv" (OuterVolumeSpecName: "kube-api-access-qx9rv") pod "198651b6-d4ce-4a00-9d4b-c9f5587e4413" (UID: "198651b6-d4ce-4a00-9d4b-c9f5587e4413"). InnerVolumeSpecName "kube-api-access-qx9rv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.022567  127458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qx9rv\" (UniqueName: \"kubernetes.io/projected/198651b6-d4ce-4a00-9d4b-c9f5587e4413-kube-api-access-qx9rv\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.022618  127458 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/198651b6-d4ce-4a00-9d4b-c9f5587e4413-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.235545  127458 scope.go:117] "RemoveContainer" containerID="28f3ee6ad526cee7ce4f154e02b9de8bcdee70c026b084aa6c2a7415ea379013"
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.327587  127458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrb6x\" (UniqueName: \"kubernetes.io/projected/d885b228-b6fd-46cb-8255-e4f053cab565-kube-api-access-nrb6x\") pod \"d885b228-b6fd-46cb-8255-e4f053cab565\" (UID: \"d885b228-b6fd-46cb-8255-e4f053cab565\") "
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.330839  127458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d885b228-b6fd-46cb-8255-e4f053cab565-kube-api-access-nrb6x" (OuterVolumeSpecName: "kube-api-access-nrb6x") pod "d885b228-b6fd-46cb-8255-e4f053cab565" (UID: "d885b228-b6fd-46cb-8255-e4f053cab565"). InnerVolumeSpecName "kube-api-access-nrb6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.428145  127458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nrb6x\" (UniqueName: \"kubernetes.io/projected/d885b228-b6fd-46cb-8255-e4f053cab565-kube-api-access-nrb6x\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.528548  127458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g76g\" (UniqueName: \"kubernetes.io/projected/60f8afd4-c385-4f32-966b-848604e750b8-kube-api-access-5g76g\") pod \"60f8afd4-c385-4f32-966b-848604e750b8\" (UID: \"60f8afd4-c385-4f32-966b-848604e750b8\") "
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.530704  127458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f8afd4-c385-4f32-966b-848604e750b8-kube-api-access-5g76g" (OuterVolumeSpecName: "kube-api-access-5g76g") pod "60f8afd4-c385-4f32-966b-848604e750b8" (UID: "60f8afd4-c385-4f32-966b-848604e750b8"). InnerVolumeSpecName "kube-api-access-5g76g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:18:14 ubuntu-20-agent-9 kubelet[127458]: I0831 22:18:14.629545  127458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5g76g\" (UniqueName: \"kubernetes.io/projected/60f8afd4-c385-4f32-966b-848604e750b8-kube-api-access-5g76g\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	
	
	==> storage-provisioner [ac698f11498f] <==
	I0831 22:06:59.735494       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:06:59.746723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:06:59.746843       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:06:59.757717       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:06:59.757987       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_cb368b1a-896f-4847-be4f-8cfaedfa0c88!
	I0831 22:06:59.758051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ccaa6e6a-8f65-406e-a497-c4b064f9759a", APIVersion:"v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_cb368b1a-896f-4847-be4f-8cfaedfa0c88 became leader
	I0831 22:06:59.858419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_cb368b1a-896f-4847-be4f-8cfaedfa0c88!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:262: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:283: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-9/10.154.0.4
	Start Time:       Sat, 31 Aug 2024 22:09:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpj88 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lpj88:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-9
	  Normal   Pulling    7m43s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x20 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:286: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.96s)


Test pass (105/168)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 2.57
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 1.56
15 TestDownloadOnly/v1.31.0/binaries 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.12
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 42.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 103.78
29 TestAddons/serial/Volcano 39.59
31 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/parallel/InspektorGadget 10.5
36 TestAddons/parallel/MetricsServer 5.39
37 TestAddons/parallel/HelmTiller 9.83
39 TestAddons/parallel/CSI 49.47
40 TestAddons/parallel/Headlamp 16.92
41 TestAddons/parallel/CloudSpanner 6.27
43 TestAddons/parallel/NvidiaDevicePlugin 6.24
44 TestAddons/parallel/Yakd 10.44
45 TestAddons/StoppedEnableDisable 10.76
47 TestCertExpiration 228.72
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 29.33
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 32.21
62 TestFunctional/serial/KubeContext 0.05
63 TestFunctional/serial/KubectlGetPods 0.08
65 TestFunctional/serial/MinikubeKubectlCmd 0.11
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
67 TestFunctional/serial/ExtraConfig 39.19
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.86
70 TestFunctional/serial/LogsFileCmd 0.9
71 TestFunctional/serial/InvalidService 5.14
73 TestFunctional/parallel/ConfigCmd 0.28
74 TestFunctional/parallel/DashboardCmd 4.2
75 TestFunctional/parallel/DryRun 0.16
76 TestFunctional/parallel/InternationalLanguage 0.09
77 TestFunctional/parallel/StatusCmd 0.46
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.24
81 TestFunctional/parallel/ProfileCmd/profile_list 0.23
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.23
84 TestFunctional/parallel/ServiceCmd/DeployApp 10.15
85 TestFunctional/parallel/ServiceCmd/List 0.35
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.15
89 TestFunctional/parallel/ServiceCmd/URL 0.16
90 TestFunctional/parallel/ServiceCmdConnect 7.33
91 TestFunctional/parallel/AddonsCmd 0.12
92 TestFunctional/parallel/PersistentVolumeClaim 24.09
105 TestFunctional/parallel/MySQL 22.48
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.82
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.96
114 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.24
120 TestFunctional/parallel/License 0.95
121 TestFunctional/delete_echo-server_images 0.03
122 TestFunctional/delete_my-image_image 0.02
123 TestFunctional/delete_minikube_cached_images 0.02
128 TestImageBuild/serial/Setup 14.53
129 TestImageBuild/serial/NormalBuild 2.87
130 TestImageBuild/serial/BuildWithBuildArg 1
131 TestImageBuild/serial/BuildWithDockerIgnore 0.78
132 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.67
136 TestJSONOutput/start/Command 25.52
137 TestJSONOutput/start/Audit 0
139 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/pause/Command 0.55
143 TestJSONOutput/pause/Audit 0
145 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/unpause/Command 0.44
149 TestJSONOutput/unpause/Audit 0
151 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/stop/Command 5.32
155 TestJSONOutput/stop/Audit 0
157 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
159 TestErrorJSONOutput 0.21
164 TestMainNoArgs 0.05
165 TestMinikubeProfile 35.22
174 TestPause/serial/Start 30.04
175 TestPause/serial/SecondStartNoReconfiguration 30.58
176 TestPause/serial/Pause 0.52
177 TestPause/serial/VerifyStatus 0.14
178 TestPause/serial/Unpause 0.44
179 TestPause/serial/PauseAgain 0.56
180 TestPause/serial/DeletePaused 1.84
181 TestPause/serial/VerifyDeletedResources 0.07
195 TestRunningBinaryUpgrade 77.08
197 TestStoppedBinaryUpgrade/Setup 2.27
198 TestStoppedBinaryUpgrade/Upgrade 50.93
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
200 TestKubernetesUpgrade 323.51
TestDownloadOnly/v1.20.0/json-events (2.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (2.568043611s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (2.57s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (62.279669ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:05:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:05:49.644268  122314 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:05:49.644899  122314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:49.644919  122314 out.go:358] Setting ErrFile to fd 2...
	I0831 22:05:49.644927  122314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:49.645406  122314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-115525/.minikube/bin
	W0831 22:05:49.645806  122314 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18943-115525/.minikube/config/config.json: open /home/jenkins/minikube-integration/18943-115525/.minikube/config/config.json: no such file or directory
	I0831 22:05:49.646585  122314 out.go:352] Setting JSON to true
	I0831 22:05:49.647565  122314 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6490,"bootTime":1725135460,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:05:49.647656  122314 start.go:139] virtualization: kvm guest
	I0831 22:05:49.650150  122314 out.go:97] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0831 22:05:49.650280  122314 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-115525/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:05:49.650324  122314 notify.go:220] Checking for updates...
	I0831 22:05:49.651689  122314 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:05:49.653221  122314 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:05:49.654610  122314 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	I0831 22:05:49.655992  122314 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	I0831 22:05:49.657385  122314 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (1.56s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.556184208s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (1.56s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
--- PASS: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (60.949395ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:05:52
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:05:52.531245  122466 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:05:52.531521  122466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:52.531532  122466 out.go:358] Setting ErrFile to fd 2...
	I0831 22:05:52.531536  122466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:52.531755  122466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-115525/.minikube/bin
	I0831 22:05:52.532386  122466 out.go:352] Setting JSON to true
	I0831 22:05:52.533320  122466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6493,"bootTime":1725135460,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:05:52.533388  122466 start.go:139] virtualization: kvm guest
	I0831 22:05:52.535247  122466 out.go:97] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0831 22:05:52.535378  122466 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-115525/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:05:52.535465  122466 notify.go:220] Checking for updates...
	I0831 22:05:52.536712  122466 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:05:52.538367  122466 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:05:52.540039  122466 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	I0831 22:05:52.541677  122466 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	I0831 22:05:52.542960  122466 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:45795 --driver=none --bootstrapper=kubeadm
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.57s)

TestOffline (42.86s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (41.123271963s)
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.732493459s)
--- PASS: TestOffline (42.86s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (52.54674ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (51.069729ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (103.78s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m43.782983177s)
--- PASS: TestAddons/Setup (103.78s)

TestAddons/serial/Volcano (39.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 10.378756ms
addons_test.go:913: volcano-controller stabilized in 10.708753ms
addons_test.go:897: volcano-scheduler stabilized in 10.897059ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-scheduler-576bc46687-vrzx2" [abcf169d-2100-4cc6-8a33-e7f1c1c0b65f] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004617794s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-admission-77d7d48b68-69wrr" [ab42ace0-2e6a-4824-8879-3e7a6c9e4d0e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004097853s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-controllers-56675bb4d5-8b69h" [8be2bc20-5a28-494f-a4d1-9bca810ffb2d] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003804434s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:345: "test-job-nginx-0" [8a71c3d8-55df-4cc3-8bb2-566792ad4a62] Pending
helpers_test.go:345: "test-job-nginx-0" [8a71c3d8-55df-4cc3-8bb2-566792ad4a62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "test-job-nginx-0" [8a71c3d8-55df-4cc3-8bb2-566792ad4a62] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003520104s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.256583378s)
--- PASS: TestAddons/serial/Volcano (39.59s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/parallel/InspektorGadget (10.5s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:345: "gadget-wxf5m" [cc825ff0-eaed-44bb-bea0-5a93e3561356] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004347598s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.490205304s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)

TestAddons/parallel/MetricsServer (5.39s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.189121ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:345: "metrics-server-84c5f94fbc-q2plw" [b4475d05-68d0-4661-ad2a-9dfc22b3badf] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003580421s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.39s)

TestAddons/parallel/HelmTiller (9.83s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.011935ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:345: "tiller-deploy-b48cc5f79-tb4n6" [8f4f84ee-4dff-427a-b8a8-718a8e75f67c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003973074s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.53343786s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.83s)

TestAddons/parallel/CSI (49.47s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.581077ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:345: "task-pv-pod" [6d3ba3c3-bdec-4d69-84aa-6f734374ac03] Pending
helpers_test.go:345: "task-pv-pod" [6d3ba3c3-bdec-4d69-84aa-6f734374ac03] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod" [6d3ba3c3-bdec-4d69-84aa-6f734374ac03] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003681998s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:420: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:420: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:345: "task-pv-pod-restore" [98436edd-f4e5-441c-ba58-2044f53b8454] Pending
helpers_test.go:345: "task-pv-pod-restore" [98436edd-f4e5-441c-ba58-2044f53b8454] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod-restore" [98436edd-f4e5-441c-ba58-2044f53b8454] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003797493s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.315763404s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.47s)
TestAddons/parallel/Headlamp (16.92s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:345: "headlamp-57fb76fcdb-rgvd7" [7b34c595-77ad-4386-aca6-96ff24350e2f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-rgvd7" [7b34c595-77ad-4386-aca6-96ff24350e2f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003905657s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.411978143s)
--- PASS: TestAddons/parallel/Headlamp (16.92s)
TestAddons/parallel/CloudSpanner (6.27s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:345: "cloud-spanner-emulator-769b77f747-h7b8f" [15f83c02-87ce-41a1-8231-0ceda1b0d206] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004199846s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (6.27s)
TestAddons/parallel/NvidiaDevicePlugin (6.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:345: "nvidia-device-plugin-daemonset-t2ghz" [7baf585d-317d-48b6-b82a-cb92cb62c9a2] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004036811s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.24s)
TestAddons/parallel/Yakd (10.44s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:345: "yakd-dashboard-67d98fc6b-9ppm2" [4c517c16-1c64-4e5e-a10f-67b5e53af5a5] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004287629s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.439249265s)
--- PASS: TestAddons/parallel/Yakd (10.44s)
TestAddons/StoppedEnableDisable (10.76s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.429571125s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.76s)
TestCertExpiration (228.72s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.721162614s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.216889963s)
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.783166505s)
--- PASS: TestCertExpiration (228.72s)
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18943-115525/.minikube/files/etc/test/nested/copy/122302/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (29.33s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (29.331922382s)
--- PASS: TestFunctional/serial/StartWithProxy (29.33s)
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (32.21s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (32.21378194s)
functional_test.go:663: soft start took 32.214437637s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.21s)
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
TestFunctional/serial/ExtraConfig (39.19s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.192505685s)
functional_test.go:761: restart took 39.192636473s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.19s)
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (0.86s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.86s)
TestFunctional/serial/LogsFileCmd (0.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd3225096231/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.90s)
TestFunctional/serial/InvalidService (5.14s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (165.764835ms)
-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.154.0.4:32661 |
	|-----------|-------------|-------------|-------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.78213779s)
--- PASS: TestFunctional/serial/InvalidService (5.14s)
TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (44.642189ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (45.684135ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)
TestFunctional/parallel/DashboardCmd (4.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/08/31 22:26:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:509: unable to kill pid 157872: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.20s)
TestFunctional/parallel/DryRun (0.16s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (83.134429ms)
-- stdout --
	* minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0831 22:26:03.813010  158230 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:26:03.813261  158230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:03.813270  158230 out.go:358] Setting ErrFile to fd 2...
	I0831 22:26:03.813275  158230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:03.813464  158230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-115525/.minikube/bin
	I0831 22:26:03.814092  158230 out.go:352] Setting JSON to false
	I0831 22:26:03.815123  158230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7704,"bootTime":1725135460,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:26:03.815194  158230 start.go:139] virtualization: kvm guest
	I0831 22:26:03.817760  158230 out.go:177] * minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0831 22:26:03.819229  158230 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-115525/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:26:03.819279  158230 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:26:03.819286  158230 notify.go:220] Checking for updates...
	I0831 22:26:03.820818  158230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:26:03.822176  158230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	I0831 22:26:03.823437  158230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	I0831 22:26:03.824760  158230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:26:03.826117  158230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:26:03.827819  158230 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:26:03.828171  158230 exec_runner.go:51] Run: systemctl --version
	I0831 22:26:03.831138  158230 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:26:03.842130  158230 out.go:177] * Using the none driver based on existing profile
	I0831 22:26:03.843437  158230 start.go:297] selected driver: none
	I0831 22:26:03.843454  158230 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home
/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:26:03.843614  158230 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:26:03.843641  158230 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0831 22:26:03.843930  158230 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0831 22:26:03.846265  158230 out.go:201] 
	W0831 22:26:03.847417  158230 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 22:26:03.848607  158230 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.16s)
TestFunctional/parallel/InternationalLanguage (0.09s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (86.337369ms)
-- stdout --
	* minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0831 22:26:03.976654  158260 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:26:03.976780  158260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:03.976790  158260 out.go:358] Setting ErrFile to fd 2...
	I0831 22:26:03.976795  158260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:26:03.977084  158260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-115525/.minikube/bin
	I0831 22:26:03.977697  158260 out.go:352] Setting JSON to false
	I0831 22:26:03.978712  158260 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7704,"bootTime":1725135460,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0831 22:26:03.978812  158260 start.go:139] virtualization: kvm guest
	I0831 22:26:03.980755  158260 out.go:177] * minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0831 22:26:03.982014  158260 out.go:177]   - MINIKUBE_LOCATION=18943
	W0831 22:26:03.982007  158260 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-115525/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:26:03.982076  158260 notify.go:220] Checking for updates...
	I0831 22:26:03.984365  158260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:26:03.985886  158260 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	I0831 22:26:03.987274  158260 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	I0831 22:26:03.988524  158260 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0831 22:26:03.989963  158260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:26:03.991835  158260 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:26:03.992312  158260 exec_runner.go:51] Run: systemctl --version
	I0831 22:26:03.995084  158260 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:26:04.007798  158260 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0831 22:26:04.008997  158260 start.go:297] selected driver: none
	I0831 22:26:04.009027  158260 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home
/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:26:04.009177  158260 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:26:04.009208  158260 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0831 22:26:04.009653  158260 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0831 22:26:04.011898  158260 out.go:201] 
	W0831 22:26:04.013122  158260 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 22:26:04.014367  158260 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)
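The RSRC_INSUFFICIENT_REQ_MEMORY exit above is a pre-flight validation: the requested allocation (250MiB) is compared against a usable minimum (1800MB) before the cluster is started. A minimal sketch of that kind of check, with illustrative names and the threshold taken from the log (this is not minikube's actual implementation):

```python
# Hedged sketch of a memory pre-flight check: reject a start request when the
# requested allocation falls below a usable minimum, as the log above shows.

MINIMUM_USABLE_MB = 1800  # minimum reported in the failing run


def validate_memory(requested_mb: int, minimum_mb: int = MINIMUM_USABLE_MB) -> str:
    """Return an error code when the requested memory is below the minimum."""
    if requested_mb < minimum_mb:
        return "RSRC_INSUFFICIENT_REQ_MEMORY"
    return "OK"


print(validate_memory(250))   # RSRC_INSUFFICIENT_REQ_MEMORY, as in the run above
print(validate_memory(2200))  # OK
```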

                                                
                                    
TestFunctional/parallel/StatusCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "183.632816ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.950792ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "184.180788ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.240688ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:345: "hello-node-6b9f76b5c7-pxh5b" [6940b567-f536-49da-9983-7817c24c9484] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:345: "hello-node-6b9f76b5c7-pxh5b" [6940b567-f536-49da-9983-7817c24c9484] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003561736s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "341.605651ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.154.0.4:32490
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.154.0.4:32490
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:345: "hello-node-connect-67bdd5bbb4-g4jcf" [52336f0c-fd2d-4f6a-a55a-798677229bc6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:345: "hello-node-connect-67bdd5bbb4-g4jcf" [52336f0c-fd2d-4f6a-a55a-798677229bc6] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003621834s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.154.0.4:31851
functional_test.go:1675: http://10.154.0.4:31851: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-g4jcf

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.154.0.4:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=10.154.0.4:31851
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.33s)
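The echoserver body above reports request metadata as tab-indented `key=value` lines grouped under section headers. A minimal parser sketch for pulling those fields out (a hypothetical helper, not part of the test suite; the sample body is abridged from the log):

```python
# Collect key=value pairs from an echoserver response body; section headers
# and lines without "=" (e.g. "Hostname: ...") are skipped.

def parse_echoserver_body(body: str) -> dict:
    fields = {}
    for line in body.splitlines():
        line = line.strip()
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value
    return fields


body = """\
Hostname: hello-node-connect-67bdd5bbb4-g4jcf
\tclient_address=10.244.0.1
\tmethod=GET
\thost=10.154.0.4:31851
"""
print(parse_echoserver_body(body)["method"])  # GET
```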

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:345: "storage-provisioner" [3712e00c-3aa2-4546-bf44-655ff8227eb2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003991733s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [e0c77c2d-4314-48c6-b71e-800f879bd026] Pending
helpers_test.go:345: "sp-pod" [e0c77c2d-4314-48c6-b71e-800f879bd026] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [e0c77c2d-4314-48c6-b71e-800f879bd026] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003875288s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.338293857s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [20b45fdf-b5fe-4cc4-8ab5-54d0d86c92b6] Pending
helpers_test.go:345: "sp-pod" [20b45fdf-b5fe-4cc4-8ab5-54d0d86c92b6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [20b45fdf-b5fe-4cc4-8ab5-54d0d86c92b6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003603818s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.09s)
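The PVC test above verifies persistence by writing a marker file (`touch /tmp/mount/foo`) through one pod, deleting that pod, recreating it against the same claim, and listing the mount again. A self-contained simulation of that flow, with a temp directory standing in for the PersistentVolume (hypothetical; the real test shells out to kubectl):

```python
# Simulate the write / delete pod / recreate pod / read-back persistence check.
import pathlib
import tempfile

volume = pathlib.Path(tempfile.mkdtemp())  # stands in for the PersistentVolume

# First pod: touch /tmp/mount/foo
(volume / "foo").touch()

# The pod is deleted and recreated, but the volume outlives the pod object.
# Second pod: ls /tmp/mount
print(sorted(p.name for p in volume.iterdir()))  # ['foo']
```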

                                                
                                    
TestFunctional/parallel/MySQL (22.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:345: "mysql-6cdb49bbb-qp864" [7d2f689f-da89-44ce-a995-529b17e654b5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:345: "mysql-6cdb49bbb-qp864" [7d2f689f-da89-44ce-a995-529b17e654b5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003341996s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-qp864 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-qp864 -- mysql -ppassword -e "show databases;": exit status 1 (129.340361ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-qp864 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-qp864 -- mysql -ppassword -e "show databases;": exit status 1 (111.214553ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-qp864 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-qp864 -- mysql -ppassword -e "show databases;": exit status 1 (113.648371ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-qp864 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.48s)
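The repeated `ERROR 2002 ... mysqld.sock` failures above are expected: the pod reports Running before mysqld accepts connections, so the test simply re-runs the probe until it succeeds. The pattern can be sketched generically (hypothetical helper; the real test re-invokes `kubectl exec`):

```python
# Poll-until-ready: retry a probe callable until it returns True or the
# attempt budget is exhausted. delay is kept at 0 here; real code would
# sleep or back off between probes.
import time


def wait_until_ready(probe, attempts: int = 5, delay: float = 0.0) -> bool:
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False


# Simulate a server that only accepts connections on the fourth probe,
# mirroring the three failed mysql invocations in the log above.
state = {"calls": 0}

def probe() -> bool:
    state["calls"] += 1
    return state["calls"] >= 4

print(wait_until_ready(probe))  # True, after three failed probes
```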

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.816669312s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.82s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (13.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.96268622s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.96s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.24s)

                                                
                                    
TestFunctional/parallel/License (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.95s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (14.53s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.530569752s)
--- PASS: TestImageBuild/serial/Setup (14.53s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (2.87143645s)
--- PASS: TestImageBuild/serial/NormalBuild (2.87s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.00s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

                                                
                                    
TestJSONOutput/start/Command (25.52s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (25.51519841s)
--- PASS: TestJSONOutput/start/Command (25.52s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.44s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.32s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.322463764s)
--- PASS: TestJSONOutput/stop/Command (5.32s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.033633ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a4c4ad5-9a7b-4939-b621-895b60fe2ee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a88dad02-35de-4644-8d58-15fe8c59f929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"2465cb0a-faee-4799-8264-a47e02382021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cdf6caa8-abf4-4021-9aec-d9d571a725cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig"}}
	{"specversion":"1.0","id":"1c13ba3c-fe2b-4af5-bcc2-4b49837a5026","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube"}}
	{"specversion":"1.0","id":"35c06ce8-8832-4be1-a5ab-2a72eae879e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"829ffa92-f0ff-4a18-8be9-d035fdeaaff4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ea86e9f8-e281-46f7-a125-cc59eba74e88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.21s)
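The `--output=json` stream in the stdout dump above is a sequence of CloudEvents envelopes, one JSON object per line. A minimal, hypothetical sketch of filtering such a stream for the error event — the two sample lines are trimmed copies of events from the dump above; everything beyond the field names shown there is illustrative, not minikube's own code:

```python
import json

# Two trimmed event lines in the shape shown in the -- stdout -- dump:
# an info event, then the final io.k8s.sigs.minikube.error event.
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_LOCATION=18943"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver \'fail\' is not supported on linux/amd64"}}',
]

events = [json.loads(line) for line in stream]
errors = [e["data"] for e in events if e["type"] == "io.k8s.sigs.minikube.error"]
for err in errors:
    # note: exitcode is serialized as a string in the event payload
    print(err["name"], err["exitcode"])
```

This matches the observed exit status 56: the error event carries the exit code as data rather than relying on the process status alone.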

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (35.22s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (15.272748163s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.983045663s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.307903621s)
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (35.22s)

                                                
                                    
TestPause/serial/Start (30.04s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (30.0350998s)
--- PASS: TestPause/serial/Start (30.04s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.58s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.580101727s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.58s)

                                                
                                    
TestPause/serial/Pause (0.52s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.52s)

                                                
                                    
TestPause/serial/VerifyStatus (0.14s)
=== RUN   TestPause/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (143.979458ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
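The `status --output=json --layout=cluster` payload above encodes component state with HTTP-style codes (200 OK, 418 Paused, 405 Stopped), which is why the command exits non-zero on a paused cluster. A hypothetical sketch of reading that payload — the JSON is a trimmed copy of the dump above; the constant and the checks are illustrative:

```python
import json

# Trimmed copy of the cluster-layout status payload from the dump above.
payload = json.loads("""
{"Name":"minikube","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"minikube","StatusCode":200,
   "Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},
                 "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}
""")

PAUSED = 418  # illustrative constant, matching the codes in the dump

cluster_paused = payload["StatusCode"] == PAUSED
apiserver = payload["Nodes"][0]["Components"]["apiserver"]
print(cluster_paused, apiserver["StatusName"])
```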

                                                
                                    
TestPause/serial/Unpause (0.44s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.44s)

                                                
                                    
TestPause/serial/PauseAgain (0.56s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.56s)

                                                
                                    
TestPause/serial/DeletePaused (1.84s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.840213486s)
--- PASS: TestPause/serial/DeletePaused (1.84s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.07s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                    
TestRunningBinaryUpgrade (77.08s)
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1042192889 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1042192889 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (34.785055093s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (36.057486808s)
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.37733732s)
--- PASS: TestRunningBinaryUpgrade (77.08s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.27s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (50.93s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1256371885 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1256371885 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.202642435s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1256371885 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1256371885 -p minikube stop: (23.65749773s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.073282642s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.93s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                    
TestKubernetesUpgrade (323.51s)
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (33.737454728s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.347574018s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (78.472063ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m19.461711067s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (69.6372ms)

                                                
                                                
-- stdout --
	* minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-115525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-115525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.387145625s)
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.37243454s)
--- PASS: TestKubernetesUpgrade (323.51s)
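The downgrade refusal above (exit 106, K8S_DOWNGRADE_UNSUPPORTED) hinges on comparing the requested Kubernetes version against the running cluster's version. A hypothetical sketch of such a guard, assuming simple `vMAJOR.MINOR.PATCH` strings as seen in the log; this is not minikube's actual implementation:

```python
# Illustrative downgrade guard, not minikube's code.
def parse_version(v: str) -> tuple:
    """Split 'v1.31.0' into a comparable (1, 31, 0) tuple."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_downgrade(running: str, requested: str) -> bool:
    return parse_version(requested) < parse_version(running)

print(is_downgrade("v1.31.0", "v1.20.0"))  # the rejected case above
print(is_downgrade("v1.20.0", "v1.31.0"))  # the allowed upgrade case
```

Tuple comparison gives the expected lexicographic ordering of (major, minor, patch), so v1.20.0 requested against a running v1.31.0 cluster is flagged as a downgrade.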

                                                
                                    

Test skip (62/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.0/preload-exists 0
14 TestDownloadOnly/v1.31.0/cached-images 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
98 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
101 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
103 TestFunctional/parallel/SSHCmd 0
104 TestFunctional/parallel/CpCmd 0
106 TestFunctional/parallel/FileSync 0
107 TestFunctional/parallel/CertSync 0
112 TestFunctional/parallel/DockerEnv 0
113 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/ImageCommands 0
116 TestFunctional/parallel/NonActiveRuntimeDisabled 0
124 TestGvisorAddon 0
125 TestMultiControlPlane 0
133 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
160 TestKicCustomNetwork 0
161 TestKicExistingNetwork 0
162 TestKicCustomSubnet 0
163 TestKicStaticIP 0
166 TestMountStart 0
167 TestContainerIPsMultiNetwork 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.14
189 TestStartStop/group/newest-cni 0.13
190 TestStartStop/group/default-k8s-diff-port 0.14
191 TestStartStop/group/no-preload 0.13
192 TestStartStop/group/disable-driver-mounts 0.13
193 TestStartStop/group/embed-certs 0.13
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

=== RUN   TestContainerIPsMultiNetwork
multinetwork_test.go:43: running with runtime:docker goos:linux goarch:amd64
multinetwork_test.go:45: skipping: only docker driver supported
--- SKIP: TestContainerIPsMultiNetwork (0.00s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:176: Cleaning up "minikube" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)