Test Report: none_Linux 19529

d7f9f66bdcb95e27f1005d5ce9d414c92a72aaf8:2024-08-28:35983

Failed tests (1/168)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 71.73s   |
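The failing step is the in-cluster connectivity probe against the registry service. To reproduce it manually, one can re-issue the exact command the test runs (a sketch assuming the same profile/context name, "minikube", with the registry addon enabled, as in this run):

    # Re-run the probe the test executes; the context name "minikube" is taken from this run.
    kubectl --context minikube run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

The test expects the probe to report "HTTP/1.1 200"; in this run the command instead exited non-zero after the one-minute wait, as the log below shows.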
TestAddons/parallel/Registry (71.73s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.567618ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-rtp2b" [2c727f7b-0cf9-4843-a060-78e13883fe27] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002964795s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v7gq6" [4d2021b8-55cb-4260-88fc-edac5c2173d8] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00343927s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.076676838s)

-- stdout --
	pod "registry-test" deleted

                                                
                                                
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/08/28 17:03:23 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:35429               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:53 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.33.1 | 28 Aug 24 16:54 UTC | 28 Aug 24 16:54 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:51
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:51.231414   20719 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:51.231860   20719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:51.231873   20719 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:51.231880   20719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:51.232335   20719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10135/.minikube/bin
	I0828 16:51:51.233317   20719 out.go:352] Setting JSON to false
	I0828 16:51:51.234183   20719 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2058,"bootTime":1724861853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:51:51.234244   20719 start.go:139] virtualization: kvm guest
	I0828 16:51:51.236281   20719 out.go:177] * minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 16:51:51.237558   20719 out.go:177]   - MINIKUBE_LOCATION=19529
	W0828 16:51:51.237544   20719 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-10135/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 16:51:51.237629   20719 notify.go:220] Checking for updates...
	I0828 16:51:51.239827   20719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:51.241048   20719 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	I0828 16:51:51.242264   20719 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	I0828 16:51:51.243599   20719 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 16:51:51.244831   20719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 16:51:51.246024   20719 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:51:51.256430   20719 out.go:177] * Using the none driver based on user configuration
	I0828 16:51:51.257598   20719 start.go:297] selected driver: none
	I0828 16:51:51.257615   20719 start.go:901] validating driver "none" against <nil>
	I0828 16:51:51.257625   20719 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 16:51:51.257650   20719 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0828 16:51:51.257919   20719 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0828 16:51:51.258407   20719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:51:51.258608   20719 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:51:51.258669   20719 cni.go:84] Creating CNI manager for ""
	I0828 16:51:51.258684   20719 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 16:51:51.258693   20719 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:51:51.258733   20719 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:51:51.260156   20719 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0828 16:51:51.261613   20719 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/config.json ...
	I0828 16:51:51.261643   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/config.json: {Name:mk6eedcd645d9a68ea0fc579a6e53955b25745c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:51.261767   20719 start.go:360] acquireMachinesLock for minikube: {Name:mka1638c483578dda3e9e32334e7fbf26da86364 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 16:51:51.261795   20719 start.go:364] duration metric: took 16.183µs to acquireMachinesLock for "minikube"
	I0828 16:51:51.261807   20719 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 16:51:51.261861   20719 start.go:125] createHost starting for "" (driver="none")
	I0828 16:51:51.263199   20719 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0828 16:51:51.264236   20719 exec_runner.go:51] Run: systemctl --version
	I0828 16:51:51.266620   20719 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0828 16:51:51.266649   20719 client.go:168] LocalClient.Create starting
	I0828 16:51:51.266694   20719 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10135/.minikube/certs/ca.pem
	I0828 16:51:51.266721   20719 main.go:141] libmachine: Decoding PEM data...
	I0828 16:51:51.266734   20719 main.go:141] libmachine: Parsing certificate...
	I0828 16:51:51.266782   20719 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10135/.minikube/certs/cert.pem
	I0828 16:51:51.266804   20719 main.go:141] libmachine: Decoding PEM data...
	I0828 16:51:51.266817   20719 main.go:141] libmachine: Parsing certificate...
	I0828 16:51:51.267098   20719 client.go:171] duration metric: took 443.007µs to LocalClient.Create
	I0828 16:51:51.267121   20719 start.go:167] duration metric: took 502.007µs to libmachine.API.Create "minikube"
	I0828 16:51:51.267129   20719 start.go:293] postStartSetup for "minikube" (driver="none")
	I0828 16:51:51.267166   20719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 16:51:51.267194   20719 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 16:51:51.275957   20719 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0828 16:51:51.275975   20719 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0828 16:51:51.275983   20719 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0828 16:51:51.277663   20719 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0828 16:51:51.278861   20719 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10135/.minikube/addons for local assets ...
	I0828 16:51:51.278908   20719 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10135/.minikube/files for local assets ...
	I0828 16:51:51.278944   20719 start.go:296] duration metric: took 11.810407ms for postStartSetup
	I0828 16:51:51.279543   20719 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/config.json ...
	I0828 16:51:51.279669   20719 start.go:128] duration metric: took 17.800033ms to createHost
	I0828 16:51:51.279677   20719 start.go:83] releasing machines lock for "minikube", held for 17.874919ms
	I0828 16:51:51.280071   20719 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 16:51:51.280167   20719 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0828 16:51:51.281936   20719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 16:51:51.281998   20719 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 16:51:51.292092   20719 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0828 16:51:51.292120   20719 start.go:495] detecting cgroup driver to use...
	I0828 16:51:51.292147   20719 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 16:51:51.292247   20719 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 16:51:51.310228   20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0828 16:51:51.318322   20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 16:51:51.326110   20719 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 16:51:51.326159   20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 16:51:51.334638   20719 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 16:51:51.342598   20719 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 16:51:51.351019   20719 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 16:51:51.359543   20719 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 16:51:51.367618   20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 16:51:51.375653   20719 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 16:51:51.383473   20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 16:51:51.391495   20719 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 16:51:51.399194   20719 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 16:51:51.407182   20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0828 16:51:51.618874   20719 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0828 16:51:51.684008   20719 start.go:495] detecting cgroup driver to use...
	I0828 16:51:51.684061   20719 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 16:51:51.684179   20719 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 16:51:51.702599   20719 exec_runner.go:51] Run: which cri-dockerd
	I0828 16:51:51.703469   20719 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 16:51:51.710580   20719 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0828 16:51:51.710602   20719 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0828 16:51:51.710637   20719 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0828 16:51:51.717255   20719 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0828 16:51:51.717386   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2133849740 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0828 16:51:51.724625   20719 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0828 16:51:51.942585   20719 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0828 16:51:52.163123   20719 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 16:51:52.163277   20719 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0828 16:51:52.163295   20719 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0828 16:51:52.163337   20719 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0828 16:51:52.171067   20719 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0828 16:51:52.171189   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1923619406 /etc/docker/daemon.json
	I0828 16:51:52.179377   20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0828 16:51:52.386501   20719 exec_runner.go:51] Run: sudo systemctl restart docker
	I0828 16:51:52.686175   20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 16:51:52.696560   20719 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0828 16:51:52.710614   20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 16:51:52.720447   20719 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0828 16:51:52.942591   20719 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0828 16:51:53.161219   20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0828 16:51:53.372571   20719 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0828 16:51:53.385665   20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 16:51:53.395311   20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0828 16:51:53.610955   20719 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0828 16:51:53.675414   20719 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 16:51:53.675471   20719 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0828 16:51:53.677136   20719 start.go:563] Will wait 60s for crictl version
	I0828 16:51:53.677179   20719 exec_runner.go:51] Run: which crictl
	I0828 16:51:53.678007   20719 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0828 16:51:53.708425   20719 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0828 16:51:53.708494   20719 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0828 16:51:53.729056   20719 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0828 16:51:53.751553   20719 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0828 16:51:53.751635   20719 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0828 16:51:53.754356   20719 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0828 16:51:53.755538   20719 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 16:51:53.755643   20719 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 16:51:53.755659   20719 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.0 docker true true} ...
	I0828 16:51:53.755771   20719 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0828 16:51:53.755814   20719 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0828 16:51:53.799394   20719 cni.go:84] Creating CNI manager for ""
	I0828 16:51:53.799422   20719 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 16:51:53.799432   20719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 16:51:53.799453   20719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 16:51:53.799599   20719 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 16:51:53.799650   20719 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 16:51:53.807722   20719 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0828 16:51:53.807766   20719 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0828 16:51:53.815038   20719 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0828 16:51:53.815079   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0828 16:51:53.815042   20719 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0828 16:51:53.815188   20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0828 16:51:53.815041   20719 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0828 16:51:53.815334   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0828 16:51:53.826025   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0828 16:51:53.864328   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4187861228 /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 16:51:53.864682   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2546517855 /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 16:51:53.890782   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1422019770 /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 16:51:53.955315   20719 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 16:51:53.963124   20719 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0828 16:51:53.963142   20719 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0828 16:51:53.963176   20719 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0828 16:51:53.970109   20719 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0828 16:51:53.970227   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3284635942 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0828 16:51:53.978324   20719 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0828 16:51:53.978340   20719 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0828 16:51:53.978368   20719 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0828 16:51:53.984970   20719 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 16:51:53.985080   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3543767253 /lib/systemd/system/kubelet.service
	I0828 16:51:53.992267   20719 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0828 16:51:53.992383   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3937683150 /var/tmp/minikube/kubeadm.yaml.new
	I0828 16:51:53.999883   20719 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0828 16:51:54.001103   20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0828 16:51:54.226551   20719 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0828 16:51:54.240413   20719 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube for IP: 10.138.0.48
	I0828 16:51:54.240434   20719 certs.go:194] generating shared ca certs ...
	I0828 16:51:54.240451   20719 certs.go:226] acquiring lock for ca certs: {Name:mk22a8c2144a7d6964e6b37d9c80e4caab8a8dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:54.240562   20719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.key
	I0828 16:51:54.240599   20719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10135/.minikube/proxy-client-ca.key
	I0828 16:51:54.240608   20719 certs.go:256] generating profile certs ...
	I0828 16:51:54.240652   20719 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.key
	I0828 16:51:54.240671   20719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.crt with IP's: []
	I0828 16:51:54.547376   20719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.crt ...
	I0828 16:51:54.547404   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.crt: {Name:mk8a49ed35fd7d9762460679b6e6a57ffafbaf7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:54.547532   20719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.key ...
	I0828 16:51:54.547542   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.key: {Name:mkd9775e3f45fbfe9210db2a820d59a821d9e905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:54.547599   20719 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0828 16:51:54.547612   20719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0828 16:51:55.027398   20719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0828 16:51:55.027428   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk15fd5643820644a01d331bdfed69765092ed16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:55.027542   20719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0828 16:51:55.027555   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk222daf87a02fdad80d6658a8192118ed1581cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:55.027604   20719 certs.go:381] copying /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt
	I0828 16:51:55.027687   20719 certs.go:385] copying /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key
	I0828 16:51:55.027771   20719 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key
	I0828 16:51:55.027788   20719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0828 16:51:55.169228   20719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt ...
	I0828 16:51:55.169261   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt: {Name:mk348a585877396cb99763220bc3a49ab69d3498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:55.169397   20719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key ...
	I0828 16:51:55.169408   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key: {Name:mk7fd98c26b17b3d7b3c49fdcd98d024fde067c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:55.169555   20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/ca-key.pem (1679 bytes)
	I0828 16:51:55.169587   20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/ca.pem (1078 bytes)
	I0828 16:51:55.169610   20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/cert.pem (1123 bytes)
	I0828 16:51:55.169639   20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/key.pem (1675 bytes)
	I0828 16:51:55.170156   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 16:51:55.170277   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1568032011 /var/lib/minikube/certs/ca.crt
	I0828 16:51:55.178503   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 16:51:55.178626   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3742733851 /var/lib/minikube/certs/ca.key
	I0828 16:51:55.185909   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 16:51:55.186022   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1769574544 /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 16:51:55.194410   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 16:51:55.194516   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2769527872 /var/lib/minikube/certs/proxy-client-ca.key
	I0828 16:51:55.201948   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0828 16:51:55.202039   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3109992943 /var/lib/minikube/certs/apiserver.crt
	I0828 16:51:55.210079   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 16:51:55.210176   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2277576273 /var/lib/minikube/certs/apiserver.key
	I0828 16:51:55.217395   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 16:51:55.217502   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1509678444 /var/lib/minikube/certs/proxy-client.crt
	I0828 16:51:55.225851   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 16:51:55.225945   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1059749772 /var/lib/minikube/certs/proxy-client.key
	I0828 16:51:55.233251   20719 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0828 16:51:55.233265   20719 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:51:55.233290   20719 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:51:55.240690   20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 16:51:55.240801   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3299571709 /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:51:55.249775   20719 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 16:51:55.249882   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2311913383 /var/lib/minikube/kubeconfig
	I0828 16:51:55.257237   20719 exec_runner.go:51] Run: openssl version
	I0828 16:51:55.260053   20719 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 16:51:55.268072   20719 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:51:55.269294   20719 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:51:55.269333   20719 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:51:55.272136   20719 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 16:51:55.279188   20719 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 16:51:55.280278   20719 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 16:51:55.280316   20719 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:51:55.280410   20719 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 16:51:55.295810   20719 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 16:51:55.304379   20719 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 16:51:55.312252   20719 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0828 16:51:55.331216   20719 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 16:51:55.339132   20719 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 16:51:55.339150   20719 kubeadm.go:157] found existing configuration files:
	
	I0828 16:51:55.339183   20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 16:51:55.346872   20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 16:51:55.346906   20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 16:51:55.353498   20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 16:51:55.360589   20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 16:51:55.360622   20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 16:51:55.367271   20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 16:51:55.374395   20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 16:51:55.374427   20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 16:51:55.381410   20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 16:51:55.388202   20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 16:51:55.388235   20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 16:51:55.394891   20719 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 16:51:55.426450   20719 kubeadm.go:310] W0828 16:51:55.426343   21608 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:51:55.426915   20719 kubeadm.go:310] W0828 16:51:55.426864   21608 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:51:55.428447   20719 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 16:51:55.428501   20719 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 16:51:55.518580   20719 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 16:51:55.518671   20719 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 16:51:55.518683   20719 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 16:51:55.518688   20719 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 16:51:55.528437   20719 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 16:51:55.532115   20719 out.go:235]   - Generating certificates and keys ...
	I0828 16:51:55.532158   20719 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 16:51:55.532169   20719 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 16:51:55.679237   20719 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 16:51:55.913438   20719 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 16:51:56.044209   20719 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 16:51:56.116265   20719 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 16:51:56.177109   20719 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 16:51:56.177218   20719 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0828 16:51:56.292701   20719 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 16:51:56.292786   20719 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0828 16:51:56.414644   20719 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 16:51:56.560775   20719 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 16:51:56.798743   20719 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 16:51:56.798857   20719 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 16:51:56.861023   20719 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 16:51:56.988243   20719 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 16:51:57.079274   20719 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 16:51:57.772259   20719 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 16:51:57.843373   20719 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 16:51:57.843938   20719 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 16:51:57.846128   20719 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 16:51:57.848145   20719 out.go:235]   - Booting up control plane ...
	I0828 16:51:57.848165   20719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 16:51:57.848178   20719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 16:51:57.848655   20719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 16:51:57.875912   20719 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 16:51:57.880006   20719 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 16:51:57.880037   20719 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 16:51:58.111341   20719 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 16:51:58.111364   20719 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 16:51:59.113061   20719 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001687898s
	I0828 16:51:59.113802   20719 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 16:52:03.114469   20719 kubeadm.go:310] [api-check] The API server is healthy after 4.00126966s
	I0828 16:52:03.125573   20719 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 16:52:03.135324   20719 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 16:52:03.150393   20719 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 16:52:03.150415   20719 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 16:52:03.157152   20719 kubeadm.go:310] [bootstrap-token] Using token: xipvp4.cbrr22unyblglu5o
	I0828 16:52:03.158513   20719 out.go:235]   - Configuring RBAC rules ...
	I0828 16:52:03.158544   20719 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 16:52:03.161349   20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 16:52:03.166082   20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 16:52:03.168253   20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 16:52:03.170472   20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 16:52:03.173409   20719 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 16:52:03.520867   20719 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 16:52:03.940900   20719 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 16:52:04.519929   20719 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 16:52:04.520734   20719 kubeadm.go:310] 
	I0828 16:52:04.520743   20719 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 16:52:04.520747   20719 kubeadm.go:310] 
	I0828 16:52:04.520752   20719 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 16:52:04.520756   20719 kubeadm.go:310] 
	I0828 16:52:04.520760   20719 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 16:52:04.520765   20719 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 16:52:04.520768   20719 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 16:52:04.520779   20719 kubeadm.go:310] 
	I0828 16:52:04.520783   20719 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 16:52:04.520787   20719 kubeadm.go:310] 
	I0828 16:52:04.520790   20719 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 16:52:04.520793   20719 kubeadm.go:310] 
	I0828 16:52:04.520795   20719 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 16:52:04.520798   20719 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 16:52:04.520800   20719 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 16:52:04.520802   20719 kubeadm.go:310] 
	I0828 16:52:04.520808   20719 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 16:52:04.520811   20719 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 16:52:04.520814   20719 kubeadm.go:310] 
	I0828 16:52:04.520826   20719 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xipvp4.cbrr22unyblglu5o \
	I0828 16:52:04.520831   20719 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb28cbd695d699ba2c523a84ccf8745e98e5bae9700dcd7a322a5ef1a0c7d4c2 \
	I0828 16:52:04.520846   20719 kubeadm.go:310] 	--control-plane 
	I0828 16:52:04.520850   20719 kubeadm.go:310] 
	I0828 16:52:04.520854   20719 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 16:52:04.520858   20719 kubeadm.go:310] 
	I0828 16:52:04.520862   20719 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xipvp4.cbrr22unyblglu5o \
	I0828 16:52:04.520866   20719 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb28cbd695d699ba2c523a84ccf8745e98e5bae9700dcd7a322a5ef1a0c7d4c2 
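The two W-lines at the top of this init transcript flag the v1beta3 kubeadm config as deprecated; kubeadm ships a migrator for exactly this case. A sketch against the generated config (the output path is illustrative, not something the run above produced):

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
      kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm.migrated.yaml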
	I0828 16:52:04.523903   20719 cni.go:84] Creating CNI manager for ""
	I0828 16:52:04.523934   20719 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 16:52:04.525781   20719 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 16:52:04.526837   20719 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0828 16:52:04.537152   20719 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 16:52:04.537284   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2018640280 /etc/cni/net.d/1-k8s.conflist
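The 496-byte conflist copied above is minikube's rendered bridge-CNI config. A minimal conflist of the same shape, written to a scratch path with illustrative values only (not the exact file minikube wrote):

    # bridge plugin with host-local IPAM, plus portmap for hostPort support;
    # subnet and flags are assumptions, not taken from the run above
    sudo tee /tmp/1-k8s.conflist.example >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF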
	I0828 16:52:04.547379   20719 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 16:52:04.547423   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:04.547469   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_08_28T16_52_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0828 16:52:04.555796   20719 ops.go:34] apiserver oom_adj: -16
	I0828 16:52:04.610461   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:05.110749   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:05.610574   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:06.110768   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:06.611271   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:07.111504   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:07.611028   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:08.110773   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:08.611015   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:09.110604   20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:09.185268   20719 kubeadm.go:1113] duration metric: took 4.637890861s to wait for elevateKubeSystemPrivileges
	I0828 16:52:09.185308   20719 kubeadm.go:394] duration metric: took 13.904987542s to StartCluster
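The ten near-identical `get sa default` runs above are a fixed-interval poll: the cluster-admin binding for kube-system cannot be applied until the controller-manager has created the `default` ServiceAccount. A shell equivalent of the same wait (500 ms period, as in the log):

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl \
          --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing of the retries above
    done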
	I0828 16:52:09.185334   20719 settings.go:142] acquiring lock: {Name:mk01918bb9c900e7329bdae41560e39d087de7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:09.185397   20719 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10135/kubeconfig
	I0828 16:52:09.185972   20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/kubeconfig: {Name:mk291fc5a30453909f461e99fae52dabc9fc4c55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:09.186146   20719 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 16:52:09.186238   20719 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
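Each key in the toEnable map above corresponds to a `minikube addons` toggle; the report's own binary exposes the same switches, e.g.:

    out/minikube-linux-amd64 -p minikube addons list
    out/minikube-linux-amd64 -p minikube addons enable registry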
	I0828 16:52:09.186417   20719 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 16:52:09.186432   20719 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0828 16:52:09.186423   20719 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0828 16:52:09.186439   20719 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0828 16:52:09.186457   20719 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0828 16:52:09.186468   20719 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0828 16:52:09.186470   20719 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0828 16:52:09.186478   20719 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0828 16:52:09.186487   20719 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0828 16:52:09.186491   20719 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0828 16:52:09.186490   20719 addons.go:69] Setting registry=true in profile "minikube"
	I0828 16:52:09.186505   20719 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0828 16:52:09.186515   20719 addons.go:234] Setting addon registry=true in "minikube"
	I0828 16:52:09.186517   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.186523   20719 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0828 16:52:09.186544   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.186548   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.187044   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187061   20719 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0828 16:52:09.187073   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.187102   20719 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0828 16:52:09.187119   20719 addons.go:69] Setting volcano=true in profile "minikube"
	I0828 16:52:09.187119   20719 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0828 16:52:09.187132   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.187142   20719 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0828 16:52:09.187144   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187160   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.186472   20719 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0828 16:52:09.187198   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.186459   20719 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0828 16:52:09.186482   20719 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0828 16:52:09.187328   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.187339   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.187165   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.187142   20719 addons.go:234] Setting addon volcano=true in "minikube"
	I0828 16:52:09.187549   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.187746   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187104   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187759   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.187769   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.187790   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.187804   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.187922   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187929   20719 out.go:177] * Configuring local host environment ...
	I0828 16:52:09.187934   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187046   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187934   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.188034   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.188065   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.188068   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.187182   20719 mustload.go:65] Loading cluster: minikube
	I0828 16:52:09.188161   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.188172   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.188198   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.188266   20719 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 16:52:09.187944   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.188323   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.186421   20719 addons.go:69] Setting yakd=true in profile "minikube"
	I0828 16:52:09.188523   20719 addons.go:234] Setting addon yakd=true in "minikube"
	I0828 16:52:09.188552   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.188729   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.188751   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.188780   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.189129   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.189152   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.189181   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.186481   20719 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0828 16:52:09.189738   20719 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0828 16:52:09.189778   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.190407   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.190433   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.190461   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0828 16:52:09.190731   20719 out.go:270] * 
	I0828 16:52:09.187109   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.187144   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.187940   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.186464   20719 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	W0828 16:52:09.192011   20719 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	I0828 16:52:09.192019   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.192029   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.192057   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.192061   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0828 16:52:09.192020   20719 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0828 16:52:09.192324   20719 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0828 16:52:09.192335   20719 out.go:270] * 
	I0828 16:52:09.192020   20719 host.go:66] Checking if "minikube" exists ...
	W0828 16:52:09.192419   20719 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0828 16:52:09.192433   20719 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0828 16:52:09.192440   20719 out.go:270] * 
	W0828 16:52:09.192467   20719 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0828 16:52:09.192480   20719 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0828 16:52:09.192493   20719 out.go:270] * 
	W0828 16:52:09.192503   20719 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
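The chown/mv relocation the warning describes can be done automatically, as the last W-line says. A sketch, set before starting with the none driver (sudo -E keeps the variable across the required root escalation):

    export CHANGE_MINIKUBE_NONE_USER=true
    sudo -E out/minikube-linux-amd64 start --driver=none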
	I0828 16:52:09.192532   20719 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 16:52:09.192994   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.193011   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.193034   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.194174   20719 out.go:177] * Verifying Kubernetes components...
	I0828 16:52:09.198297   20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0828 16:52:09.210135   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.213218   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.214472   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.215938   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.228945   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.229182   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.229647   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.230037   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.230527   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.236667   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.236781   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.242923   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.244332   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.244737   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.246966   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.247019   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.247893   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.247943   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.248967   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.252024   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.252081   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.258798   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.258890   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.263203   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.263257   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.267641   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.267711   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.267960   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.268600   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.268623   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.271791   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.271887   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.272959   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.273008   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.274117   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.274168   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.275334   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.276567   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.276625   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
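Each "Checking apiserver status" sequence above is a three-step probe, repeated once per addon goroutine: resolve the apiserver's freezer cgroup from /proc, confirm the cgroup is THAWED (i.e. the pod is not paused), then hit /healthz. One pass as a sketch; curl -k stands in for the CA-verified client the code actually uses:

    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    cgpath=$(sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${cgpath}/freezer.state"   # expect: THAWED
    curl -sk https://10.138.0.48:8443/healthz                  # expect: ok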
	I0828 16:52:09.278332   20719 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 16:52:09.279618   20719 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 16:52:09.279648   20719 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 16:52:09.279982   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1216619049 /etc/kubernetes/addons/yakd-ns.yaml
	I0828 16:52:09.280194   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.280216   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.287889   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.287911   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.287892   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.288296   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.289198   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.290657   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.290678   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.291831   20719 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0828 16:52:09.292170   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.292188   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.292412   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.292429   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.292889   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.292933   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.294316   20719 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0828 16:52:09.295105   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.295121   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.296930   20719 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0828 16:52:09.297044   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.298449   20719 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 16:52:09.298465   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.298474   20719 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 16:52:09.298590   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1323458966 /etc/kubernetes/addons/yakd-sa.yaml
	I0828 16:52:09.299384   20719 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 16:52:09.299417   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0828 16:52:09.299941   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3148500674 /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 16:52:09.300111   20719 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 16:52:09.300938   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.300959   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.301179   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.301190   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.302779   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.303920   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.303943   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.304806   20719 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 16:52:09.304935   20719 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:09.304956   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 16:52:09.307197   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube577144511 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:09.307395   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.307452   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.308108   20719 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0828 16:52:09.308209   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.309346   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.309972   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.309916   20719 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0828 16:52:09.310357   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0828 16:52:09.310832   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.311468   20719 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 16:52:09.317201   20719 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 16:52:09.317436   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 16:52:09.317798   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube241128223 /etc/kubernetes/addons/registry-rc.yaml
	I0828 16:52:09.320137   20719 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 16:52:09.320615   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 16:52:09.321157   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 16:52:09.321492   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1528453054 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0828 16:52:09.321697   20719 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:09.321720   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 16:52:09.322353   20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 16:52:09.322386   20719 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 16:52:09.322506   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2230331267 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 16:52:09.324305   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 16:52:09.325623   20719 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 16:52:09.325656   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 16:52:09.325661   20719 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 16:52:09.326945   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 16:52:09.327070   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube384429 /etc/kubernetes/addons/yakd-crb.yaml
	I0828 16:52:09.327370   20719 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0828 16:52:09.327401   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.326964   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3346000782 /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:09.328807   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.328827   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.328859   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.330111   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 16:52:09.331402   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 16:52:09.332574   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.332593   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.334100   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 16:52:09.336265   20719 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 16:52:09.337872   20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 16:52:09.337912   20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 16:52:09.338832   20719 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 16:52:09.338855   20719 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 16:52:09.344564   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.344598   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.349463   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:09.349624   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.349637   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.349958   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.350283   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.350339   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.351078   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3454000902 /etc/kubernetes/addons/registry-svc.yaml
	I0828 16:52:09.351527   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube816012186 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 16:52:09.351596   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:09.351961   20719 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 16:52:09.353262   20719 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 16:52:09.353294   20719 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 16:52:09.355415   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1153670770 /etc/kubernetes/addons/ig-namespace.yaml
	I0828 16:52:09.357159   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.358230   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.359763   20719 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 16:52:09.360920   20719 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:09.360938   20719 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0828 16:52:09.360945   20719 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:09.360982   20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:09.367824   20719 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 16:52:09.367840   20719 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0828 16:52:09.367847   20719 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 16:52:09.367862   20719 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0828 16:52:09.367917   20719 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 16:52:09.367933   20719 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 16:52:09.367979   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3348239087 /etc/kubernetes/addons/yakd-svc.yaml
	I0828 16:52:09.368047   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1789194110 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 16:52:09.369972   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.371120   20719 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0828 16:52:09.371158   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:09.371872   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:09.371891   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:09.371924   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:09.372182   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3211966672 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0828 16:52:09.374379   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 16:52:09.376872   20719 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:09.376903   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 16:52:09.377024   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3449337573 /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:09.384168   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.384308   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.384657   20719 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 16:52:09.384716   20719 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 16:52:09.384733   20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 16:52:09.384804   20719 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 16:52:09.385347   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3756036716 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 16:52:09.385973   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2512930758 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 16:52:09.386523   20719 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
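The sed pipeline above patches the live CoreDNS Corefile in place: it enables the `log` plugin and inserts a `hosts` block mapping host.minikube.internal to 127.0.0.1 (the node itself under the none driver), ahead of the default forward-to-resolv.conf rule. The result can be spot-checked afterwards:

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'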
	I0828 16:52:09.390166   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 16:52:09.390403   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3066196383 /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:09.391884   20719 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:09.391914   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 16:52:09.392017   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3448776941 /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:09.393082   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.396035   20719 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:09.396060   20719 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0828 16:52:09.396167   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube445499146 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:09.399052   20719 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 16:52:09.399081   20719 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 16:52:09.399191   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2707607647 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 16:52:09.400736   20719 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 16:52:09.403433   20719 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 16:52:09.403462   20719 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 16:52:09.403599   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube495669776 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 16:52:09.417129   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.417199   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.420638   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:09.421373   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
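This apply creates the exact objects TestAddons/parallel/Registry later polls; the same label selectors the test waits on can be checked by hand while it runs:

    kubectl --context minikube -n kube-system get pods -l actual-registry=true
    kubectl --context minikube -n kube-system get pods -l registry-proxy=true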
	I0828 16:52:09.424394   20719 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 16:52:09.424433   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 16:52:09.424565   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1570379995 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 16:52:09.425548   20719 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 16:52:09.425583   20719 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 16:52:09.425621   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:09.425708   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube533332411 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 16:52:09.439110   20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 16:52:09.439141   20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 16:52:09.439262   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3612433538 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 16:52:09.452424   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:09.452833   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.452877   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.459542   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:09.459586   20719 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:09.459602   20719 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0828 16:52:09.459609   20719 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:09.459649   20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:09.463986   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:09.464047   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:09.475775   20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 16:52:09.475814   20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 16:52:09.475948   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3213713111 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 16:52:09.483530   20719 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 16:52:09.483562   20719 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 16:52:09.483691   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4257002272 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 16:52:09.492568   20719 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 16:52:09.492600   20719 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 16:52:09.492724   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3073608350 /etc/kubernetes/addons/ig-role.yaml
	I0828 16:52:09.507004   20719 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 16:52:09.507071   20719 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 16:52:09.507210   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2465777229 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 16:52:09.513210   20719 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 16:52:09.513364   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3501012253 /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:09.516146   20719 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 16:52:09.516177   20719 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 16:52:09.516301   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3461712430 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 16:52:09.517690   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:09.520352   20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 16:52:09.520381   20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 16:52:09.520490   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube217431899 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 16:52:09.533071   20719 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:09.533099   20719 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 16:52:09.533212   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1874314693 /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:09.537660   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:09.537690   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:09.543714   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
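
The pair of checks above is how minikube decides the apiserver is actually serving: it resolves the apiserver container's freezer cgroup, confirms the state is "THAWED" (i.e. the process is not frozen), then probes /healthz over TLS and treats an HTTP 200 as healthy. The same probe recurs at 16:52:16 and 16:52:18 below as a cheap liveness gate. A minimal sketch of the healthz probe, assuming the address and port from the log (10.138.0.48:8443) and skipping certificate verification for brevity (a real client would verify against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves /healthz over TLS with a cluster-internal CA;
		// this quick sketch skips verification rather than loading that CA.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.138.0.48:8443/healthz") // address taken from the log above
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned:", resp.Status) // the log expects "200"
	}
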
	I0828 16:52:09.552281   20719 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 16:52:09.552320   20719 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 16:52:09.552447   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube704300080 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 16:52:09.552584   20719 out.go:177]   - Using image docker.io/busybox:stable
	I0828 16:52:09.554395   20719 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 16:52:09.555581   20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 16:52:09.555606   20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 16:52:09.555738   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1381229224 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 16:52:09.555783   20719 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:09.555805   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 16:52:09.555918   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4280683204 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:09.575978   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:09.606465   20719 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:09.606499   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 16:52:09.606619   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2413234350 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:09.606702   20719 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0828 16:52:09.628692   20719 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 16:52:09.628729   20719 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 16:52:09.628865   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube236456223 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 16:52:09.645282   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:09.651916   20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 16:52:09.651947   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 16:52:09.652066   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805233403 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 16:52:09.659638   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:09.659813   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:09.680876   20719 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 16:52:09.680914   20719 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 16:52:09.682272   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube342720222 /etc/kubernetes/addons/ig-crd.yaml
	I0828 16:52:09.712408   20719 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0828 16:52:09.716703   20719 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0828 16:52:09.716732   20719 node_ready.go:38] duration metric: took 4.292467ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0828 16:52:09.716745   20719 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:52:09.726369   20719 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:09.738070   20719 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:09.738105   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 16:52:09.738224   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2250346700 /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:09.742704   20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 16:52:09.742730   20719 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 16:52:09.742846   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4233661258 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 16:52:09.766247   20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 16:52:09.766286   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 16:52:09.766547   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1346394353 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 16:52:09.782972   20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 16:52:09.783005   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 16:52:09.783133   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2498888286 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 16:52:09.827335   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:09.887471   20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:09.887508   20719 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 16:52:09.887637   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1193548962 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:09.907610   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:10.058918   20719 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0828 16:52:10.296332   20719 addons.go:475] Verifying addon registry=true in "minikube"
	I0828 16:52:10.310014   20719 out.go:177] * Verifying registry addon...
	I0828 16:52:10.312660   20719 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 16:52:10.316102   20719 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 16:52:10.316121   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:10.482217   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.056393969s)
	I0828 16:52:10.577420   20719 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0828 16:52:10.588055   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.167370011s)
	I0828 16:52:10.590673   20719 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0828 16:52:10.643668   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125935758s)
	I0828 16:52:10.834043   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:10.906535   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330508284s)
	I0828 16:52:10.906572   20719 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0828 16:52:10.964691   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.13730518s)
	I0828 16:52:11.116699   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.457018181s)
	I0828 16:52:11.319319   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:11.416752   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.771404749s)
	W0828 16:52:11.416796   20719 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 16:52:11.416824   20719 retry.go:31] will retry after 203.233605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
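
The failure above is the usual CRD-establishment race: a single kubectl apply creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the custom resource is rejected with "no matches for kind" because API discovery has not yet caught up with the just-created CRDs. minikube's answer (retry.go) is to back off ~200ms and re-apply; the second pass at 16:52:11.621 below succeeds once the CRDs are established (completion logged at 16:52:14.416). A minimal sketch of that retry pattern, assuming kubectl on PATH and an illustrative manifest path, not minikube's actual exec_runner plumbing:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` until it succeeds, because a
	// custom resource can be rejected until its CRD is established.
	func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			out, e := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("apply %s: %v\n%s", manifest, e, out)
			time.Sleep(backoff) // the log above waited ~203ms before retrying
			backoff *= 2
		}
		return err
	}

	func main() {
		// "snapshot-addon.yaml" is an illustrative stand-in for the
		// CRD + VolumeSnapshotClass manifests applied in the log.
		if err := applyWithRetry("snapshot-addon.yaml", 5, 200*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}
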
	I0828 16:52:11.621057   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:11.733790   20719 pod_ready.go:103] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:11.816393   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:12.320028   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:12.391749   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.017333384s)
	I0828 16:52:12.592540   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.684878592s)
	I0828 16:52:12.592578   20719 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0828 16:52:12.598971   20719 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 16:52:12.602059   20719 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 16:52:12.606135   20719 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 16:52:12.606160   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:12.818824   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:13.106438   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:13.316732   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:13.607323   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:13.817106   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:14.106393   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:14.231423   20719 pod_ready.go:103] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:14.316214   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:14.416509   20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.795372342s)
	I0828 16:52:14.607063   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:14.816252   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:15.107721   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:15.316647   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:15.606764   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:15.816503   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:16.107789   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:16.232593   20719 pod_ready.go:103] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:16.316924   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:16.326272   20719 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 16:52:16.326425   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1705106344 /var/lib/minikube/google_application_credentials.json
	I0828 16:52:16.338275   20719 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 16:52:16.338419   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3813997986 /var/lib/minikube/google_cloud_project
	I0828 16:52:16.349902   20719 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0828 16:52:16.349959   20719 host.go:66] Checking if "minikube" exists ...
	I0828 16:52:16.350443   20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0828 16:52:16.350462   20719 api_server.go:166] Checking apiserver status ...
	I0828 16:52:16.350496   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:16.366772   20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
	I0828 16:52:16.376050   20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
	I0828 16:52:16.376100   20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
	I0828 16:52:16.388279   20719 api_server.go:204] freezer state: "THAWED"
	I0828 16:52:16.388304   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:16.392800   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:16.392858   20719 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 16:52:16.395607   20719 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:16.397140   20719 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 16:52:16.398393   20719 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 16:52:16.398439   20719 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 16:52:16.398575   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2661921807 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 16:52:16.410235   20719 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 16:52:16.410265   20719 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 16:52:16.410395   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3492112993 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 16:52:16.420861   20719 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:52:16.420891   20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 16:52:16.421004   20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1888726884 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:52:16.430238   20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:52:16.606902   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:16.817307   20719 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0828 16:52:16.818806   20719 out.go:177] * Verifying gcp-auth addon...
	I0828 16:52:16.819469   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:16.820964   20719 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 16:52:16.947834   20719 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 16:52:17.155408   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:17.233047   20719 pod_ready.go:93] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:17.233069   20719 pod_ready.go:82] duration metric: took 7.506622225s for pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:17.233078   20719 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:17.317170   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:17.668726   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:17.817052   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:18.107087   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:18.235604   20719 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-v9dmm" not found
	I0828 16:52:18.235631   20719 pod_ready.go:82] duration metric: took 1.002545512s for pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace to be "Ready" ...
	E0828 16:52:18.235644   20719 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-v9dmm" not found
	I0828 16:52:18.235653   20719 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.240520   20719 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:18.240542   20719 pod_ready.go:82] duration metric: took 4.880246ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.240555   20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.244728   20719 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:18.244745   20719 pod_ready.go:82] duration metric: took 4.182863ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.244755   20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.248907   20719 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:18.248926   20719 pod_ready.go:82] duration metric: took 4.163274ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.248938   20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gpxwg" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.252742   20719 pod_ready.go:93] pod "kube-proxy-gpxwg" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:18.252758   20719 pod_ready.go:82] duration metric: took 3.813427ms for pod "kube-proxy-gpxwg" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.252768   20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.316862   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:18.606950   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:18.630508   20719 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:18.630527   20719 pod_ready.go:82] duration metric: took 377.75139ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:18.630535   20719 pod_ready.go:39] duration metric: took 8.913776771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
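
Each pod_ready wait above keys off the pod's Ready condition, not merely its phase: a pod can be Running while a container is still failing its readiness probe (compare the registry pods, which remain "Pending" in the kapi.go lines for another ten seconds). A minimal sketch of the same per-pod check via kubectl's JSONPath output, assuming kubectl on PATH; the pod name is the coredns pod from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// podReady reports whether the pod's Ready condition is "True".
	func podReady(namespace, name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		ready, err := podReady("kube-system", "coredns-6f6b679f8f-6tkq4") // pod name from the log
		fmt.Println(ready, err)
	}
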
	I0828 16:52:18.630551   20719 api_server.go:52] waiting for apiserver process to appear ...
	I0828 16:52:18.630602   20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:52:18.647546   20719 api_server.go:72] duration metric: took 9.454981109s to wait for apiserver process to appear ...
	I0828 16:52:18.647567   20719 api_server.go:88] waiting for apiserver healthz status ...
	I0828 16:52:18.647581   20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0828 16:52:18.650853   20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0828 16:52:18.651658   20719 api_server.go:141] control plane version: v1.31.0
	I0828 16:52:18.651680   20719 api_server.go:131] duration metric: took 4.106926ms to wait for apiserver health ...
	I0828 16:52:18.651689   20719 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 16:52:18.816251   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:18.836125   20719 system_pods.go:59] 17 kube-system pods found
	I0828 16:52:18.836170   20719 system_pods.go:61] "coredns-6f6b679f8f-6tkq4" [9f4822de-7704-4673-ae47-3d95f1d6b20d] Running
	I0828 16:52:18.836183   20719 system_pods.go:61] "csi-hostpath-attacher-0" [b57205bd-bf4a-4031-acf5-a3454e49b197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:52:18.836194   20719 system_pods.go:61] "csi-hostpath-resizer-0" [5f81070d-2097-4b29-847b-92924f3565c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:52:18.836207   20719 system_pods.go:61] "csi-hostpathplugin-lq5tq" [fbda2aa4-f9ea-486f-bc10-50dfed11eece] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:52:18.836218   20719 system_pods.go:61] "etcd-ubuntu-20-agent-2" [470651cd-1ec3-4c7d-b39e-468343f4dffa] Running
	I0828 16:52:18.836227   20719 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [81f66e5e-231f-4c7d-88da-f1fa1de4f183] Running
	I0828 16:52:18.836237   20719 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [dfdedc5e-3693-4bac-8eab-0b735fa89664] Running
	I0828 16:52:18.836244   20719 system_pods.go:61] "kube-proxy-gpxwg" [64d274a0-dfd5-4a29-98b2-3ef060017ab5] Running
	I0828 16:52:18.836250   20719 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [9093644a-c893-4630-a115-6b986c987a97] Running
	I0828 16:52:18.836265   20719 system_pods.go:61] "metrics-server-84c5f94fbc-2ngms" [8c24b175-53cd-473a-99e4-0391de9f4873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:52:18.836290   20719 system_pods.go:61] "nvidia-device-plugin-daemonset-vb2jx" [c93e7df8-d7eb-4c47-9006-ba23929faefa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0828 16:52:18.836305   20719 system_pods.go:61] "registry-6fb4cdfc84-rtp2b" [2c727f7b-0cf9-4843-a060-78e13883fe27] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 16:52:18.836316   20719 system_pods.go:61] "registry-proxy-v7gq6" [4d2021b8-55cb-4260-88fc-edac5c2173d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:52:18.836330   20719 system_pods.go:61] "snapshot-controller-56fcc65765-js985" [f9e3a59b-f20f-4453-9425-e8b795573e49] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:52:18.836345   20719 system_pods.go:61] "snapshot-controller-56fcc65765-p4gqz" [8d57f713-7241-406b-a9c5-21a6542c01ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:52:18.836354   20719 system_pods.go:61] "storage-provisioner" [01bf7c20-346b-44e8-8daf-8e2532f55d91] Running
	I0828 16:52:18.836368   20719 system_pods.go:61] "tiller-deploy-b48cc5f79-cf7dp" [82d95935-34cf-42d7-9210-1f3bd5b0ff29] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0828 16:52:18.836382   20719 system_pods.go:74] duration metric: took 184.685113ms to wait for pod list to return data ...
	I0828 16:52:18.836398   20719 default_sa.go:34] waiting for default service account to be created ...
	I0828 16:52:19.030365   20719 default_sa.go:45] found service account: "default"
	I0828 16:52:19.030387   20719 default_sa.go:55] duration metric: took 193.979614ms for default service account to be created ...
	I0828 16:52:19.030396   20719 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 16:52:19.106206   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:19.234210   20719 system_pods.go:86] 17 kube-system pods found
	I0828 16:52:19.234237   20719 system_pods.go:89] "coredns-6f6b679f8f-6tkq4" [9f4822de-7704-4673-ae47-3d95f1d6b20d] Running
	I0828 16:52:19.234244   20719 system_pods.go:89] "csi-hostpath-attacher-0" [b57205bd-bf4a-4031-acf5-a3454e49b197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:52:19.234251   20719 system_pods.go:89] "csi-hostpath-resizer-0" [5f81070d-2097-4b29-847b-92924f3565c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:52:19.234258   20719 system_pods.go:89] "csi-hostpathplugin-lq5tq" [fbda2aa4-f9ea-486f-bc10-50dfed11eece] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:52:19.234262   20719 system_pods.go:89] "etcd-ubuntu-20-agent-2" [470651cd-1ec3-4c7d-b39e-468343f4dffa] Running
	I0828 16:52:19.234266   20719 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [81f66e5e-231f-4c7d-88da-f1fa1de4f183] Running
	I0828 16:52:19.234270   20719 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [dfdedc5e-3693-4bac-8eab-0b735fa89664] Running
	I0828 16:52:19.234274   20719 system_pods.go:89] "kube-proxy-gpxwg" [64d274a0-dfd5-4a29-98b2-3ef060017ab5] Running
	I0828 16:52:19.234277   20719 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [9093644a-c893-4630-a115-6b986c987a97] Running
	I0828 16:52:19.234284   20719 system_pods.go:89] "metrics-server-84c5f94fbc-2ngms" [8c24b175-53cd-473a-99e4-0391de9f4873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:52:19.234289   20719 system_pods.go:89] "nvidia-device-plugin-daemonset-vb2jx" [c93e7df8-d7eb-4c47-9006-ba23929faefa] Running
	I0828 16:52:19.234294   20719 system_pods.go:89] "registry-6fb4cdfc84-rtp2b" [2c727f7b-0cf9-4843-a060-78e13883fe27] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 16:52:19.234302   20719 system_pods.go:89] "registry-proxy-v7gq6" [4d2021b8-55cb-4260-88fc-edac5c2173d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:52:19.234307   20719 system_pods.go:89] "snapshot-controller-56fcc65765-js985" [f9e3a59b-f20f-4453-9425-e8b795573e49] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:52:19.234317   20719 system_pods.go:89] "snapshot-controller-56fcc65765-p4gqz" [8d57f713-7241-406b-a9c5-21a6542c01ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:52:19.234320   20719 system_pods.go:89] "storage-provisioner" [01bf7c20-346b-44e8-8daf-8e2532f55d91] Running
	I0828 16:52:19.234325   20719 system_pods.go:89] "tiller-deploy-b48cc5f79-cf7dp" [82d95935-34cf-42d7-9210-1f3bd5b0ff29] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0828 16:52:19.234333   20719 system_pods.go:126] duration metric: took 203.930569ms to wait for k8s-apps to be running ...
	I0828 16:52:19.234341   20719 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 16:52:19.234381   20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0828 16:52:19.247940   20719 system_svc.go:56] duration metric: took 13.592661ms WaitForService to wait for kubelet
	I0828 16:52:19.247967   20719 kubeadm.go:582] duration metric: took 10.055404456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:52:19.247984   20719 node_conditions.go:102] verifying NodePressure condition ...
	I0828 16:52:19.315859   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:19.429973   20719 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0828 16:52:19.429995   20719 node_conditions.go:123] node cpu capacity is 8
	I0828 16:52:19.430005   20719 node_conditions.go:105] duration metric: took 182.017368ms to run NodePressure ...
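
The NodePressure verification reads capacity straight off the Node object; the values logged above (8 CPUs, 304681132Ki ephemeral storage) come from .status.capacity. A sketch that pulls the same fields with kubectl, assuming the node name from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Node name taken from the log above; the two capacity fields mirror
		// what node_conditions.go reports (cpu count and ephemeral storage).
		out, err := exec.Command("kubectl", "get", "node", "ubuntu-20-agent-2",
			"-o", `jsonpath=cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity['ephemeral-storage']}`).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(string(out)) // e.g. "cpu=8 ephemeral-storage=304681132Ki"
	}
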
	I0828 16:52:19.430019   20719 start.go:241] waiting for startup goroutines ...
	I0828 16:52:19.606909   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:19.816101   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:20.107204   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:20.316241   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:20.605915   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:20.815784   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:21.105886   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:21.316240   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:21.606812   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:21.817046   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:22.106997   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:22.315957   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:22.606595   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:22.816948   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:23.107112   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:23.316320   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:23.606923   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:23.839011   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:24.107068   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:24.315895   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:24.606478   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:24.816863   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:25.106694   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:25.317022   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:25.607322   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:25.816275   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:26.107479   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:26.316704   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:26.606430   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:26.816111   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:27.106376   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:27.316544   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:27.605945   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:27.816976   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:28.106387   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:28.316571   20719 kapi.go:107] duration metric: took 18.003905693s to wait for kubernetes.io/minikube-addons=registry ...
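
The kapi.go loop polls the label selector roughly every half second (see the timestamps above) until no matching pod is Pending; here it took 18s for the two registry pods to come up. Outside minikube, the same gate can be expressed with kubectl wait, as in this sketch (kubectl on PATH assumed; the 6m timeout matches the wait budgets used elsewhere in this log and is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until every pod carrying the addon label reports Ready=True,
		// mirroring the kapi.go wait in the log above.
		out, err := exec.Command("kubectl", "wait", "--for=condition=Ready",
			"pod", "-l", "kubernetes.io/minikube-addons=registry",
			"-n", "kube-system", "--timeout=6m").CombinedOutput()
		fmt.Println(string(out), err)
	}
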
	I0828 16:52:28.605942   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:29.109869   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:29.606158   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:30.106762   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:30.607180   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:31.107067   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:31.607405   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:32.107396   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:32.606033   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:33.106887   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:33.606840   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:34.106544   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:34.606970   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:35.106567   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:35.606551   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:36.107618   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:36.606633   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:37.106814   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:37.606647   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:38.106703   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:38.606514   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:39.106989   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:39.606775   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:40.106082   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:40.606672   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:41.107925   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:41.605986   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:42.106210   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:42.606882   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:43.106834   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:43.606429   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:44.107435   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:44.606289   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:45.107600   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:45.607067   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:46.106946   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:46.606805   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:47.107330   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:47.606388   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:48.107327   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:48.606675   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:49.105959   20719 kapi.go:107] duration metric: took 36.503902657s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 16:52:58.323952   20719 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 16:52:58.323971   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:52:58.824027   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:52:59.323832   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:52:59.823995   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:00.324470   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:00.824450   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:01.324105   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:01.824094   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:02.324412   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:02.824235   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.324359   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.824028   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.324145   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.823859   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.324095   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.824081   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.324034   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.823666   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.324687   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.824527   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.324566   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.824489   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.324636   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.824449   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.324560   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.824549   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.324599   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.824062   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.324220   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.824181   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.323786   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.825107   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.325203   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.823768   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.324404   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.824516   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.324475   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.824216   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.323960   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.823685   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.324553   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.824717   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.324528   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.824532   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.325050   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.824290   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.327090   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.824086   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.324189   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.824127   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.324243   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.823844   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.324027   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.823978   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.323785   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.825049   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.324451   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.824050   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.324126   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.824041   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.324047   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.823826   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.324527   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.824388   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.324894   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.824763   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.324674   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.823963   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.324628   20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.824509   20719 kapi.go:107] duration metric: took 1m16.003545291s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 16:53:32.826258   20719 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0828 16:53:32.827462   20719 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 16:53:32.828682   20719 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 16:53:32.830032   20719 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, default-storageclass, helm-tiller, yakd, storage-provisioner, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0828 16:53:32.831271   20719 addons.go:510] duration metric: took 1m23.645050177s for enable addons: enabled=[cloud-spanner nvidia-device-plugin default-storageclass helm-tiller yakd storage-provisioner metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0828 16:53:32.831310   20719 start.go:246] waiting for cluster config update ...
	I0828 16:53:32.831327   20719 start.go:255] writing updated cluster config ...
	I0828 16:53:32.831554   20719 exec_runner.go:51] Run: rm -f paused
	I0828 16:53:32.873811   20719 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 16:53:32.875305   20719 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
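
A note on the gcp-auth hint printed above: the webhook mutates pods at admission, so the `gcp-auth-skip-secret` label has to be in the pod spec when the pod is created; adding the label to an already-running pod generally won't undo credentials that were injected at creation. A minimal sketch of an opted-out pod (the pod name and image here are illustrative, not taken from this run):

	# Create a pod the gcp-auth webhook should skip; the label must exist at creation time.
	kubectl --context minikube run skip-demo --restart=Never \
	  --labels=gcp-auth-skip-secret=true \
	  --image=busybox -- sleep 3600
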
	
	
	==> Docker <==
	-- Logs begin at Thu 2024-07-18 18:21:33 UTC, end at Wed 2024-08-28 17:03:24 UTC. --
	Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.320965688Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.320968249Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.322629490Z" level=error msg="Error running exec 6a350433c0c7cefb159817b5b9124a95b640023503c8319c8fbaa84b59b70133 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.407538652Z" level=info msg="ignoring event" container=fd4eb1bc477fd0aafeb47027587f6da6afa534d2fcc1f09104d916b555e0e5a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 16:57:10 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:57:10.075877170Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 28 16:57:10 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:57:10.078089823Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 28 16:58:33 ubuntu-20-agent-2 cri-dockerd[21264]: time="2024-08-28T16:58:33Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc"
	Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.269652382Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.269741327Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.271689104Z" level=error msg="Error running exec 5b960aed91d1276e11e71a3e0b45951a3efa6f064c82632e5fa0a8406ac7a74d in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.382196877Z" level=info msg="ignoring event" container=24aae337936daaece615cd4278c47021b1b0aada6007c1792cc8cfdb3ea21a58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:00:01 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:00:01.064854804Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 28 17:00:01 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:00:01.066792558Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 28 17:02:24 ubuntu-20-agent-2 cri-dockerd[21264]: time="2024-08-28T17:02:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2226ebe473ce84ab6edf16972b9c08fbea4c9f7729bb7c565f41f52fc0a6d255/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Aug 28 17:02:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:24.240603178Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:02:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:24.242845844Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:02:39 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:39.069251280Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:02:39 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:39.071331092Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:03:06 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:06.074684140Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:03:06 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:06.076971828Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:03:23 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:23.697195289Z" level=info msg="ignoring event" container=2226ebe473ce84ab6edf16972b9c08fbea4c9f7729bb7c565f41f52fc0a6d255 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:03:23 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:23.948157516Z" level=info msg="ignoring event" container=2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:03:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:24.013245450Z" level=info msg="ignoring event" container=d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:03:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:24.100011136Z" level=info msg="ignoring event" container=f62c5f3fdf5ad00837d130e8d5d151d85ecb004fa362aa01e3acb023ed6672b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:03:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:24.173998058Z" level=info msg="ignoring event" container=d3a683036ac72181a27d676661a4158610a9662069da8be88825de0c5f7e411f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
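
The repeated "unauthorized: authentication failed" entries above are the daemon failing to pull gcr.io/k8s-minikube/busybox, the image the failed test pod needed. Pulling the same tag directly with the Docker CLI separates a registry-side problem from a credential-injection problem; since this run executes commands on the host itself (note the exec_runner line in the start output above), the daemon in question is the host's own:

	# Try the pull the kubelet kept failing on, outside Kubernetes.
	# An anonymous pull succeeding here would point at injected pull credentials, not the registry.
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
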
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	24aae337936da       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc                            4 minutes ago       Exited              gadget                                   6                   0084ee5c49f65       gadget-jptxz
	4287458110d5c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   7e3455bfe3def       gcp-auth-89d5ffd79-qpsmg
	c512fa27a2390       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   44187b93bc6d5       csi-hostpathplugin-lq5tq
	9a9b5a12b4d3a       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   44187b93bc6d5       csi-hostpathplugin-lq5tq
	b111855d244e4       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   44187b93bc6d5       csi-hostpathplugin-lq5tq
	56d6e6a2aecd0       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   44187b93bc6d5       csi-hostpathplugin-lq5tq
	50446845d6f76       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   44187b93bc6d5       csi-hostpathplugin-lq5tq
	4f2c52d1e58d0       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   df9edbafd9bbf       csi-hostpath-resizer-0
	8f01a953eb644       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   44187b93bc6d5       csi-hostpathplugin-lq5tq
	5cd606d0a7974       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   7784e5241d8c9       csi-hostpath-attacher-0
	c2d07d841d7b7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   2b8c54076cd38       snapshot-controller-56fcc65765-p4gqz
	76499a40cba32       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   4501b8028aee9       snapshot-controller-56fcc65765-js985
	11338d4617c58       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   fdc81c485b9f8       local-path-provisioner-86d989889c-fdj7z
	88b04840669d9       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   205b1690a8175       metrics-server-84c5f94fbc-2ngms
	1a3d42201e4c7       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  10 minutes ago      Running             tiller                                   0                   d86718d960177       tiller-deploy-b48cc5f79-cf7dp
	d8b96a3bb285a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              10 minutes ago      Exited              registry-proxy                           0                   d3a683036ac72       registry-proxy-v7gq6
	68e73845e3371       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        11 minutes ago      Running             yakd                                     0                   6aa7df080dad4       yakd-dashboard-67d98fc6b-gsjcq
	2199e6f58c84a       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                                             11 minutes ago      Exited              registry                                 0                   f62c5f3fdf5ad       registry-6fb4cdfc84-rtp2b
	9c11233fd37e8       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   e57540fc4c013       nvidia-device-plugin-daemonset-vb2jx
	42dac78b42eb0       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   09a5b2321c9f6       cloud-spanner-emulator-769b77f747-tggfz
	ca48c1eeb323c       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   b26e9b74b2fd2       storage-provisioner
	e2e27508ec9ba       cbb01a7bd410d                                                                                                                                11 minutes ago      Running             coredns                                  0                   ccc5b2cbd21f8       coredns-6f6b679f8f-6tkq4
	480f07a6cbb75       ad83b2ca7b09e                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   d779fedd88a8e       kube-proxy-gpxwg
	13f8999b5a71b       1766f54c897f0                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   2e5b7f02db33f       kube-scheduler-ubuntu-20-agent-2
	1bea33e72497f       604f5db92eaa8                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   143308d734ff8       kube-apiserver-ubuntu-20-agent-2
	ca46c5255f106       045733566833c                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   e759839799e97       kube-controller-manager-ubuntu-20-agent-2
	3a3d26b21f836       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   e5c379ea2146c       etcd-ubuntu-20-agent-2
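
Both registry containers in this table are in state Exited, consistent with the teardown events at 17:03:23 in the Docker log above. With the cri-dockerd runtime shown here, exited containers can still be inspected through the Docker CLI; a quick check using the registry container ID from the table (first column; truncated IDs work):

	# Exit code and stop time of the registry container, then its last log lines.
	docker inspect --format '{{.State.ExitCode}} {{.State.FinishedAt}}' 2199e6f58c84a
	docker logs --tail 20 2199e6f58c84a
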
	
	
	==> coredns [e2e27508ec9b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55649 - 16064 "HINFO IN 4598352363640909345.3241484874310641819. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.17111116s
	[INFO] 10.244.0.24:47649 - 7325 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314958s
	[INFO] 10.244.0.24:34012 - 52853 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00040694s
	[INFO] 10.244.0.24:59787 - 17040 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009985s
	[INFO] 10.244.0.24:59089 - 56641 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089256s
	[INFO] 10.244.0.24:40636 - 59812 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101528s
	[INFO] 10.244.0.24:52084 - 4582 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079923s
	[INFO] 10.244.0.24:35527 - 24090 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003561917s
	[INFO] 10.244.0.24:44024 - 37501 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005092398s
	[INFO] 10.244.0.24:45901 - 58319 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003584164s
	[INFO] 10.244.0.24:59493 - 33044 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003848287s
	[INFO] 10.244.0.24:41584 - 59639 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002405873s
	[INFO] 10.244.0.24:41422 - 38207 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00249938s
	[INFO] 10.244.0.24:41013 - 24388 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001218287s
	[INFO] 10.244.0.24:40080 - 48316 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001384498s
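
The query cascade above is the ndots:5 search-path expansion from the resolv.conf rewrite seen in the Docker log: each search domain returns NXDOMAIN before the bare name resolves with NOERROR, so cluster DNS itself looks healthy. To confirm that the in-cluster registry service (service "registry" in namespace "kube-system", hence registry.kube-system.svc.cluster.local) also resolves, a throwaway probe pod works; the pod name and image below are illustrative, chosen to avoid the gcr.io pulls that were failing:

	kubectl --context minikube run dns-probe --restart=Never --rm -it \
	  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7 -- \
	  nslookup registry.kube-system.svc.cluster.local
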
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T16_52_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 16:52:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:03:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 16:59:11 +0000   Wed, 28 Aug 2024 16:52:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 16:59:11 +0000   Wed, 28 Aug 2024 16:52:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 16:59:11 +0000   Wed, 28 Aug 2024 16:52:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 16:59:11 +0000   Wed, 28 Aug 2024 16:52:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    d1649260-66e1-4b69-b671-0d72f89b8086
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-tggfz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-jptxz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-qpsmg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-6tkq4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-lq5tq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gpxwg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-2ngms              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-vb2jx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-js985         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-p4gqz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-cf7dp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-fdj7z      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-gsjcq               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 14 c6 f4 e7 81 08 06
	[  +0.022167] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 ad ce 12 58 29 08 06
	[  +2.606750] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 07 1c 25 dd 91 08 06
	[  +1.645026] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 db 96 ce df dc 08 06
	[  +1.959788] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 5f 00 f0 30 25 08 06
	[  +4.473417] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 87 90 81 92 b7 08 06
	[  +0.184861] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 8f 9f a6 e2 ef 08 06
	[  +0.083137] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 a9 59 00 f1 9b 08 06
	[  +0.134707] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fe f5 a4 1e 8d f3 08 06
	[Aug28 16:53] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 e1 f9 18 7f 0c 08 06
	[  +0.036390] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d 6e 9c 55 c1 08 06
	[ +11.169244] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 c1 af 06 bf 4b 08 06
	[  +0.000514] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e ba 09 dd df 68 08 06
	
	
	==> etcd [3a3d26b21f83] <==
	{"level":"info","ts":"2024-08-28T16:52:00.029310Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T16:52:00.316395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-28T16:52:00.316445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-28T16:52:00.316472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-08-28T16:52:00.316488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:00.316497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:00.316509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:00.316522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:00.317345Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:52:00.317812Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T16:52:00.317838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T16:52:00.317850Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T16:52:00.318123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T16:52:00.318145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T16:52:00.318174Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:52:00.318243Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:52:00.318269Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:52:00.318912Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T16:52:00.319066Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T16:52:00.319816Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-08-28T16:52:00.320210Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T16:52:17.151151Z","caller":"traceutil/trace.go:171","msg":"trace[415500967] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"112.476197ms","start":"2024-08-28T16:52:17.038655Z","end":"2024-08-28T16:52:17.151132Z","steps":["trace[415500967] 'process raft request'  (duration: 112.349262ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:02:00.462417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1717}
	{"level":"info","ts":"2024-08-28T17:02:00.485412Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1717,"took":"22.505734ms","hash":1135449533,"current-db-size-bytes":8056832,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4354048,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-08-28T17:02:00.485454Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1135449533,"revision":1717,"compact-revision":-1}
	
	
	==> gcp-auth [4287458110d5] <==
	2024/08/28 16:53:32 GCP Auth Webhook started!
	2024/08/28 16:53:49 Ready to marshal response ...
	2024/08/28 16:53:49 Ready to write response ...
	2024/08/28 16:53:49 Ready to marshal response ...
	2024/08/28 16:53:49 Ready to write response ...
	2024/08/28 16:54:11 Ready to marshal response ...
	2024/08/28 16:54:11 Ready to write response ...
	2024/08/28 16:54:11 Ready to marshal response ...
	2024/08/28 16:54:11 Ready to write response ...
	2024/08/28 16:54:11 Ready to marshal response ...
	2024/08/28 16:54:11 Ready to write response ...
	2024/08/28 17:02:23 Ready to marshal response ...
	2024/08/28 17:02:23 Ready to write response ...
	
	
	==> kernel <==
	 17:03:24 up 45 min,  0 users,  load average: 0.40, 0.39, 0.34
	Linux ubuntu-20-agent-2 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [1bea33e72497] <==
	W0828 16:52:51.721286       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.161.172:443: connect: connection refused
	W0828 16:52:57.826527       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.8.25:443: connect: connection refused
	E0828 16:52:57.826559       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.8.25:443: connect: connection refused" logger="UnhandledError"
	W0828 16:53:19.838645       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.8.25:443: connect: connection refused
	E0828 16:53:19.838685       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.8.25:443: connect: connection refused" logger="UnhandledError"
	W0828 16:53:19.844914       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.8.25:443: connect: connection refused
	E0828 16:53:19.844956       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.8.25:443: connect: connection refused" logger="UnhandledError"
	I0828 16:53:49.130346       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0828 16:53:49.147067       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0828 16:54:01.494193       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0828 16:54:01.505728       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0828 16:54:01.604947       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0828 16:54:01.628897       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0828 16:54:01.628983       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0828 16:54:01.634019       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0828 16:54:01.785177       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0828 16:54:01.810938       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0828 16:54:01.858342       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0828 16:54:02.644689       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0828 16:54:02.662809       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0828 16:54:02.772294       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0828 16:54:02.803158       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0828 16:54:02.803156       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0828 16:54:02.858777       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0828 16:54:03.016745       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [ca46c5255f10] <==
	W0828 17:02:13.328707       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:13.328758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:15.963754       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:15.963795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:16.229333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:16.229373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:18.979569       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:18.979611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:19.287768       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:19.287809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:47.867355       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:47.867396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:49.304487       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:49.304526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:54.937482       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:54.937530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:55.088603       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:55.088648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:02:55.342205       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:02:55.342246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:00.746363       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:00.746405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:12.393065       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:12.393106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:03:23.914485       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="8.259µs"
	
	
	==> kube-proxy [480f07a6cbb7] <==
	I0828 16:52:09.781309       1 server_linux.go:66] "Using iptables proxy"
	I0828 16:52:10.060618       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0828 16:52:10.060707       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 16:52:10.233590       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0828 16:52:10.233657       1 server_linux.go:169] "Using iptables Proxier"
	I0828 16:52:10.237216       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 16:52:10.237573       1 server.go:483] "Version info" version="v1.31.0"
	I0828 16:52:10.237595       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:52:10.241197       1 config.go:197] "Starting service config controller"
	I0828 16:52:10.241218       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 16:52:10.241238       1 config.go:104] "Starting endpoint slice config controller"
	I0828 16:52:10.241243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 16:52:10.241789       1 config.go:326] "Starting node config controller"
	I0828 16:52:10.241886       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 16:52:10.341279       1 shared_informer.go:320] Caches are synced for service config
	I0828 16:52:10.341379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 16:52:10.341964       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [13f8999b5a71] <==
	W0828 16:52:01.314154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0828 16:52:01.314112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 16:52:01.314180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0828 16:52:01.314170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:01.314193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0828 16:52:01.314185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0828 16:52:01.314219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0828 16:52:01.314153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.244775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 16:52:02.244816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.259148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 16:52:02.259176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.277435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 16:52:02.277477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.298769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:02.298808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.315643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:02.315686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.325958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 16:52:02.325996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.493659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 16:52:02.493698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:02.583650       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 16:52:02.583688       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 16:52:04.610198       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Thu 2024-07-18 18:21:33 UTC, end at Wed 2024-08-28 17:03:24 UTC. --
	Aug 28 17:03:11 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:11.926646   22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="119c62ae-02ac-4215-aba8-bea987228018"
	Aug 28 17:03:17 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:17.926937   22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="b503f647-1d8d-481b-877b-c5c54812988d"
	Aug 28 17:03:22 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:22.924546   22182 scope.go:117] "RemoveContainer" containerID="24aae337936daaece615cd4278c47021b1b0aada6007c1792cc8cfdb3ea21a58"
	Aug 28 17:03:22 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:22.924803   22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-jptxz_gadget(7a692f95-8588-4031-aad2-f492771c07d4)\"" pod="gadget/gadget-jptxz" podUID="7a692f95-8588-4031-aad2-f492771c07d4"
	Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.785041   22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjdzm\" (UniqueName: \"kubernetes.io/projected/b503f647-1d8d-481b-877b-c5c54812988d-kube-api-access-tjdzm\") pod \"b503f647-1d8d-481b-877b-c5c54812988d\" (UID: \"b503f647-1d8d-481b-877b-c5c54812988d\") "
	Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.785089   22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b503f647-1d8d-481b-877b-c5c54812988d-gcp-creds\") pod \"b503f647-1d8d-481b-877b-c5c54812988d\" (UID: \"b503f647-1d8d-481b-877b-c5c54812988d\") "
	Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.785180   22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b503f647-1d8d-481b-877b-c5c54812988d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b503f647-1d8d-481b-877b-c5c54812988d" (UID: "b503f647-1d8d-481b-877b-c5c54812988d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.786873   22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b503f647-1d8d-481b-877b-c5c54812988d-kube-api-access-tjdzm" (OuterVolumeSpecName: "kube-api-access-tjdzm") pod "b503f647-1d8d-481b-877b-c5c54812988d" (UID: "b503f647-1d8d-481b-877b-c5c54812988d"). InnerVolumeSpecName "kube-api-access-tjdzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.885898   22182 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b503f647-1d8d-481b-877b-c5c54812988d-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.885934   22182 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tjdzm\" (UniqueName: \"kubernetes.io/projected/b503f647-1d8d-481b-877b-c5c54812988d-kube-api-access-tjdzm\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:23.927346   22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="119c62ae-02ac-4215-aba8-bea987228018"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.187748   22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqb5n\" (UniqueName: \"kubernetes.io/projected/2c727f7b-0cf9-4843-a060-78e13883fe27-kube-api-access-nqb5n\") pod \"2c727f7b-0cf9-4843-a060-78e13883fe27\" (UID: \"2c727f7b-0cf9-4843-a060-78e13883fe27\") "
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.189991   22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c727f7b-0cf9-4843-a060-78e13883fe27-kube-api-access-nqb5n" (OuterVolumeSpecName: "kube-api-access-nqb5n") pod "2c727f7b-0cf9-4843-a060-78e13883fe27" (UID: "2c727f7b-0cf9-4843-a060-78e13883fe27"). InnerVolumeSpecName "kube-api-access-nqb5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.289005   22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsxx2\" (UniqueName: \"kubernetes.io/projected/4d2021b8-55cb-4260-88fc-edac5c2173d8-kube-api-access-vsxx2\") pod \"4d2021b8-55cb-4260-88fc-edac5c2173d8\" (UID: \"4d2021b8-55cb-4260-88fc-edac5c2173d8\") "
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.289134   22182 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nqb5n\" (UniqueName: \"kubernetes.io/projected/2c727f7b-0cf9-4843-a060-78e13883fe27-kube-api-access-nqb5n\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.290937   22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2021b8-55cb-4260-88fc-edac5c2173d8-kube-api-access-vsxx2" (OuterVolumeSpecName: "kube-api-access-vsxx2") pod "4d2021b8-55cb-4260-88fc-edac5c2173d8" (UID: "4d2021b8-55cb-4260-88fc-edac5c2173d8"). InnerVolumeSpecName "kube-api-access-vsxx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.390334   22182 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vsxx2\" (UniqueName: \"kubernetes.io/projected/4d2021b8-55cb-4260-88fc-edac5c2173d8-kube-api-access-vsxx2\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.569820   22182 scope.go:117] "RemoveContainer" containerID="d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.592162   22182 scope.go:117] "RemoveContainer" containerID="d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:24.593629   22182 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f" containerID="d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.593665   22182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"} err="failed to get container status \"d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f\": rpc error: code = Unknown desc = Error response from daemon: No such container: d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.593687   22182 scope.go:117] "RemoveContainer" containerID="2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.611164   22182 scope.go:117] "RemoveContainer" containerID="2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:24.611985   22182 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b" containerID="2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
	Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.612020   22182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"} err="failed to get container status \"2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
	
	
	==> storage-provisioner [ca48c1eeb323] <==
	I0828 16:52:11.715889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 16:52:11.730409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 16:52:11.730457       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 16:52:11.737380       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 16:52:11.737570       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_7fc1c5d6-de31-409a-b964-a4acd7b6947e!
	I0828 16:52:11.738391       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0eed10bb-c3c4-4843-a38b-667c5bb26c3e", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_7fc1c5d6-de31-409a-b964-a4acd7b6947e became leader
	I0828 16:52:11.837819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_7fc1c5d6-de31-409a-b964-a4acd7b6947e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Wed, 28 Aug 2024 16:54:11 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m74n5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-m74n5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m48s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.73s)
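The Events table above contains the actual root cause: every attempt to pull gcr.io/k8s-minikube/busybox failed with "unauthorized: authentication failed", so the wget probe pod never started and the kubectl run timed out. The registry addon pods themselves were healthy throughout. A minimal triage sketch for this failure mode, assuming the minikube context is still active and Docker is the runtime (both hold for this none-driver run); the pod name dns-probe and the busybox:1.28 stand-in image are placeholders:

# Reproduce the pull failure outside the cluster (matches the ErrImagePull events above).
$ docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

# Check that the registry Service exists and resolves in-cluster; busybox:1.28 from
# Docker Hub is a stand-in image, since the gcr.io one is what fails to pull.
$ kubectl --context minikube get svc registry -n kube-system
$ kubectl --context minikube run --rm dns-probe --restart=Never --image=busybox:1.28 -it -- \
    nslookup registry.kube-system.svc.cluster.local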

                                                
                                    

Test pass (111/168)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 2.38
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.1
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 0.97
15 TestDownloadOnly/v1.31.0/binaries 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.1
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.55
22 TestOffline 39.08
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 101.69
29 TestAddons/serial/Volcano 38.4
31 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/parallel/InspektorGadget 10.43
36 TestAddons/parallel/MetricsServer 5.35
37 TestAddons/parallel/HelmTiller 9.37
39 TestAddons/parallel/CSI 48.15
40 TestAddons/parallel/Headlamp 15.82
41 TestAddons/parallel/CloudSpanner 6.24
43 TestAddons/parallel/NvidiaDevicePlugin 6.22
44 TestAddons/parallel/Yakd 10.38
45 TestAddons/StoppedEnableDisable 10.72
47 TestCertExpiration 228.6
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 29.72
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 30.86
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.07
65 TestFunctional/serial/MinikubeKubectlCmd 0.1
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 37.07
68 TestFunctional/serial/ComponentHealth 0.06
69 TestFunctional/serial/LogsCmd 0.79
70 TestFunctional/serial/LogsFileCmd 0.81
71 TestFunctional/serial/InvalidService 4.11
73 TestFunctional/parallel/ConfigCmd 0.25
74 TestFunctional/parallel/DashboardCmd 9.83
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.08
77 TestFunctional/parallel/StatusCmd 0.4
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.22
81 TestFunctional/parallel/ProfileCmd/profile_list 0.21
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
84 TestFunctional/parallel/ServiceCmd/DeployApp 9.13
85 TestFunctional/parallel/ServiceCmd/List 0.33
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.14
88 TestFunctional/parallel/ServiceCmd/Format 0.14
89 TestFunctional/parallel/ServiceCmd/URL 0.14
90 TestFunctional/parallel/ServiceCmdConnect 6.29
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 22.69
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.17
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
107 TestFunctional/parallel/MySQL 22.23
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.85
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 12.91
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.38
122 TestFunctional/parallel/License 0.21
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.01
125 TestFunctional/delete_minikube_cached_images 0.01
130 TestImageBuild/serial/Setup 14.27
131 TestImageBuild/serial/NormalBuild 1.65
132 TestImageBuild/serial/BuildWithBuildArg 0.8
133 TestImageBuild/serial/BuildWithDockerIgnore 0.58
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
138 TestJSONOutput/start/Command 30.47
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.51
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.41
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 10.39
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.19
166 TestMainNoArgs 0.04
167 TestMinikubeProfile 33.3
175 TestPause/serial/Start 28.13
176 TestPause/serial/SecondStartNoReconfiguration 32.73
177 TestPause/serial/Pause 0.49
178 TestPause/serial/VerifyStatus 0.12
179 TestPause/serial/Unpause 0.41
180 TestPause/serial/PauseAgain 0.53
181 TestPause/serial/DeletePaused 1.65
182 TestPause/serial/VerifyDeletedResources 0.06
196 TestRunningBinaryUpgrade 71.42
198 TestStoppedBinaryUpgrade/Setup 0.51
199 TestStoppedBinaryUpgrade/Upgrade 49.47
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
201 TestKubernetesUpgrade 317.39
TestDownloadOnly/v1.20.0/json-events (2.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (2.374972635s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (2.38s)
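With -o=json, minikube emits each start step as a structured JSON event on stdout, which is what this test consumes. To eyeball the stream by hand, the same invocation can be piped through a pretty-printer (jq is an assumption here; any JSON tool works):

$ out/minikube-linux-amd64 start -o=json --download-only -p minikube --force \
    --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none \
    --bootstrapper=kubeadm | jq .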

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (51.440499ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:07
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:07.387289   17033 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:07.387551   17033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:07.387561   17033 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:07.387566   17033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:07.387790   17033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10135/.minikube/bin
	W0828 16:51:07.387948   17033 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19529-10135/.minikube/config/config.json: open /home/jenkins/minikube-integration/19529-10135/.minikube/config/config.json: no such file or directory
	I0828 16:51:07.388548   17033 out.go:352] Setting JSON to true
	I0828 16:51:07.389462   17033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2014,"bootTime":1724861853,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:51:07.389518   17033 start.go:139] virtualization: kvm guest
	I0828 16:51:07.391739   17033 out.go:97] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0828 16:51:07.391832   17033 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-10135/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 16:51:07.391876   17033 notify.go:220] Checking for updates...
	I0828 16:51:07.393006   17033 out.go:169] MINIKUBE_LOCATION=19529
	I0828 16:51:07.394336   17033 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:07.395600   17033 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	I0828 16:51:07.396716   17033 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	I0828 16:51:07.397728   17033 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
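The non-zero exit is expected: --download-only never created a host, so "minikube logs" has nothing to collect and exits 85, which the test tolerates since it is only timing the logs command. A quick way to confirm the exit code by hand:

$ out/minikube-linux-amd64 logs -p minikube; echo "exit status: $?"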

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (0.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.0/json-events (0.97s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
--- PASS: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.201955ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:10.036363   17188 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:10.036453   17188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:10.036461   17188 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:10.036465   17188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:10.036648   17188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10135/.minikube/bin
	I0828 16:51:10.037164   17188 out.go:352] Setting JSON to true
	I0828 16:51:10.037948   17188 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2017,"bootTime":1724861853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:51:10.038001   17188 start.go:139] virtualization: kvm guest
	I0828 16:51:10.040064   17188 out.go:97] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 16:51:10.040181   17188 notify.go:220] Checking for updates...
	W0828 16:51:10.040142   17188 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-10135/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 16:51:10.041589   17188 out.go:169] MINIKUBE_LOCATION=19529
	I0828 16:51:10.043003   17188 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:10.044309   17188 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	I0828 16:51:10.045485   17188 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	I0828 16:51:10.046618   17188 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:35429 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (39.08s)

                                                
                                                
=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (37.502830934s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.576424467s)
--- PASS: TestOffline (39.08s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (43.37653ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (46.06199ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (101.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m41.687453436s)
--- PASS: TestAddons/Setup (101.69s)
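This single start invocation enables every addon exercised by the parallel tests below. After a setup like this, the resulting per-addon state can be verified with the standard list subcommand:

$ out/minikube-linux-amd64 -p minikube addons list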

                                                
                                    
TestAddons/serial/Volcano (38.4s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 8.609882ms
addons_test.go:913: volcano-controller stabilized in 8.673196ms
addons_test.go:905: volcano-admission stabilized in 8.715356ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-f2wzz" [7820f13e-8fd5-4255-bb2d-639e3d2e2b92] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003354495s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-bttnj" [279e02c5-bb4b-49ad-975d-f5b7f5286dc9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004222573s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-88hq6" [97eac6c3-6f98-44c5-99f8-cbd1143a40d4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003695088s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ffbba1fe-9adc-4b2e-abe4-fb143ccc3d8d] Pending
helpers_test.go:344: "test-job-nginx-0" [ffbba1fe-9adc-4b2e-abe4-fb143ccc3d8d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ffbba1fe-9adc-4b2e-abe4-fb143ccc3d8d] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003854451s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.082553335s)
--- PASS: TestAddons/serial/Volcano (38.40s)
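The vcjob.yaml fixture is not reproduced in this log, but the observed names (job test-job, task nginx, pod test-job-nginx-0, namespace my-volcano) imply a Volcano Job roughly like the sketch below. This is a reconstruction rather than the repo's actual file; the image, minAvailable, and restartPolicy are assumptions:

$ kubectl --context minikube apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano   # hand scheduling to Volcano instead of default-scheduler
  minAvailable: 1          # assumed; minimum pods for gang scheduling
  tasks:
    - name: nginx          # task name feeds the pod name: test-job-nginx-0
      replicas: 1
      template:
        spec:
          restartPolicy: Never    # assumed
          containers:
            - name: nginx
              image: nginx:latest   # placeholder image; the fixture may pin another
EOF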

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.43s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jptxz" [7a692f95-8588-4031-aad2-f492771c07d4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004507892s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.427714931s)
--- PASS: TestAddons/parallel/InspektorGadget (10.43s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.35s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.029329ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2ngms" [8c24b175-53cd-473a-99e4-0391de9f4873] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003024198s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.35s)

                                                
                                    
TestAddons/parallel/HelmTiller (9.37s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.803484ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-cf7dp" [82d95935-34cf-42d7-9210-1f3bd5b0ff29] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003324139s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.083754264s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.37s)

                                                
                                    
TestAddons/parallel/CSI (48.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.691362ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1d37fb1c-9df4-4200-b98b-594db26057e2] Pending
helpers_test.go:344: "task-pv-pod" [1d37fb1c-9df4-4200-b98b-594db26057e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1d37fb1c-9df4-4200-b98b-594db26057e2] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004230669s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c614ca82-a35c-4a37-841e-b1a87c0b6d5b] Pending
helpers_test.go:344: "task-pv-pod-restore" [c614ca82-a35c-4a37-841e-b1a87c0b6d5b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c614ca82-a35c-4a37-841e-b1a87c0b6d5b] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0032301s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.252162946s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.15s)
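The csi-hostpath-driver fixtures (pvc.yaml, pv-pod.yaml, snapshot.yaml) are likewise not inlined here. A minimal claim equivalent to the hpvc PVC being polled above would look like the sketch below; storageClassName csi-hostpath-sc is an assumption (the class the addon ships by default) and the requested size is arbitrary:

$ kubectl --context minikube apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                     # arbitrary size (assumption)
  storageClassName: csi-hostpath-sc    # assumed addon default class
EOF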

                                                
                                    
TestAddons/parallel/Headlamp (15.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-qffxc" [4521e9dc-be78-44ae-a46d-ff8bddfa15dc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-qffxc" [4521e9dc-be78-44ae-a46d-ff8bddfa15dc] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.0036349s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.372976168s)
--- PASS: TestAddons/parallel/Headlamp (15.82s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-tggfz" [d7a8ea03-a6e9-4f3a-b09a-acc67d74904d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00356854s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (6.24s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.22s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vb2jx" [c93e7df8-d7eb-4c47-9006-ba23929faefa] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00391364s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.22s)

                                                
                                    
TestAddons/parallel/Yakd (10.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-gsjcq" [d9c5858d-95dd-4ef4-860b-4442b184a502] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003213159s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.374558178s)
--- PASS: TestAddons/parallel/Yakd (10.38s)

TestAddons/StoppedEnableDisable (10.72s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.442729598s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.72s)

TestCertExpiration (228.6s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.762572657s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (33.239824218s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.594564635s)
--- PASS: TestCertExpiration (228.60s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19529-10135/.minikube/files/etc/test/nested/copy/17021/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (29.72s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (29.716788475s)
--- PASS: TestFunctional/serial/StartWithProxy (29.72s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.86s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (30.857144863s)
functional_test.go:663: soft start took 30.857886417s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (30.86s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.07s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.06638517s)
functional_test.go:761: restart took 37.066485843s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.07s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
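
The check above shells out to kubectl and asserts that every control-plane pod reports phase Running and condition Ready. For readers who want the same probe outside the harness, here is a minimal Go sketch (an illustration, not the harness's code; it assumes kubectl on PATH and the minikube context):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors only the fields of `kubectl get po -o json` that the check reads.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "minikube", "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready" // the same "status: Ready" verdict the log prints
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}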

TestFunctional/serial/LogsCmd (0.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.79s)

TestFunctional/serial/LogsFileCmd (0.81s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd4284151537/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.81s)

TestFunctional/serial/InvalidService (4.11s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (149.97241ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:30270 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)

TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.214541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.172694ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
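
The sequence above pins down the config get/set/unset contract: `config get` on an unset key exits non-zero (status 14 in this log) and prints "Error: specified key could not be found in config". A hedged Go sketch of a wrapper that treats that exit as "key unset" rather than a hard failure (illustrative; binary path and profile name are copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

const bin = "out/minikube-linux-amd64" // binary path as used in the log

// configGet returns (value, true) when the key is set; a non-zero exit
// (status 14 above) is mapped to ("", false) instead of an error.
func configGet(key string) (string, bool, error) {
	out, err := exec.Command(bin, "-p", "minikube", "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return "", false, nil // "specified key could not be found in config"
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	exec.Command(bin, "-p", "minikube", "config", "set", "cpus", "2").Run()
	if v, ok, _ := configGet("cpus"); ok {
		fmt.Println("cpus =", v)
	}
	exec.Command(bin, "-p", "minikube", "config", "unset", "cpus").Run()
	if _, ok, _ := configGet("cpus"); !ok {
		fmt.Println("cpus is unset again")
	}
}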

TestFunctional/parallel/DashboardCmd (9.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/08/28 17:11:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 52529: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.83s)

TestFunctional/parallel/DryRun (0.15s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (74.824935ms)

-- stdout --
	* minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0828 17:11:11.457284   52903 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:11:11.457532   52903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:11:11.457542   52903 out.go:358] Setting ErrFile to fd 2...
	I0828 17:11:11.457547   52903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:11:11.457733   52903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10135/.minikube/bin
	I0828 17:11:11.458211   52903 out.go:352] Setting JSON to false
	I0828 17:11:11.459110   52903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3218,"bootTime":1724861853,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:11:11.459170   52903 start.go:139] virtualization: kvm guest
	I0828 17:11:11.461447   52903 out.go:177] * minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:11:11.462672   52903 notify.go:220] Checking for updates...
	W0828 17:11:11.462659   52903 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-10135/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 17:11:11.462701   52903 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:11:11.463969   52903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:11:11.465372   52903 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	I0828 17:11:11.466619   52903 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	I0828 17:11:11.467875   52903 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:11:11.469050   52903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:11:11.470591   52903 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:11:11.470872   52903 exec_runner.go:51] Run: systemctl --version
	I0828 17:11:11.473366   52903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:11:11.485092   52903 out.go:177] * Using the none driver based on existing profile
	I0828 17:11:11.486207   52903 start.go:297] selected driver: none
	I0828 17:11:11.486223   52903 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:11:11.486318   52903 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:11:11.486344   52903 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0828 17:11:11.486637   52903 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0828 17:11:11.488677   52903 out.go:201] 
	W0828 17:11:11.489856   52903 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0828 17:11:11.490960   52903 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)
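
What DryRun verifies is that `minikube start --dry-run` validates the requested configuration without touching the cluster and signals the failure class through its exit code; the undersized 250MB request surfaces as exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) above. A small Go sketch of checking that from a caller (illustrative, not harness code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "minikube",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=none", "--bootstrapper=kubeadm")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The log above shows exit status 23 for RSRC_INSUFFICIENT_REQ_MEMORY.
		fmt.Println("dry run rejected the config, exit code:", ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // the binary could not be started at all
	}
	fmt.Println("dry run accepted the config")
}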

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (77.86861ms)

-- stdout --
	* minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0828 17:11:11.606142   52932 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:11:11.606263   52932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:11:11.606272   52932 out.go:358] Setting ErrFile to fd 2...
	I0828 17:11:11.606277   52932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:11:11.606523   52932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10135/.minikube/bin
	I0828 17:11:11.607001   52932 out.go:352] Setting JSON to false
	I0828 17:11:11.607976   52932 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3219,"bootTime":1724861853,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:11:11.608039   52932 start.go:139] virtualization: kvm guest
	I0828 17:11:11.610042   52932 out.go:177] * minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	W0828 17:11:11.611175   52932 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-10135/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 17:11:11.611209   52932 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:11:11.611226   52932 notify.go:220] Checking for updates...
	I0828 17:11:11.613504   52932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:11:11.614715   52932 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	I0828 17:11:11.615874   52932 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	I0828 17:11:11.616992   52932 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:11:11.618161   52932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:11:11.619644   52932 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:11:11.619947   52932 exec_runner.go:51] Run: systemctl --version
	I0828 17:11:11.622649   52932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:11:11.634170   52932 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0828 17:11:11.635429   52932 start.go:297] selected driver: none
	I0828 17:11:11.635446   52932 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:11:11.635562   52932 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:11:11.635588   52932 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0828 17:11:11.635941   52932 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0828 17:11:11.637896   52932 out.go:201] 
	W0828 17:11:11.638997   52932 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0828 17:11:11.640174   52932 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.4s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.40s)
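
StatusCmd exercises three output modes of `minikube status`: the default table, a text template (the {{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}} fields above), and JSON. A sketch of consuming the JSON form from Go (illustrative; it assumes a single-node profile, where the output is a single object, and field names are taken from the template above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "minikube", "status", "-o", "json").Output()
	if err != nil {
		// `minikube status` exits non-zero when components are down; the JSON
		// on stdout may still be usable, so report rather than bail out blindly.
		fmt.Println("non-zero status exit:", err)
	}
	var st status
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		panic(jsonErr)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}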

TestFunctional/parallel/ProfileCmd/profile_not_create (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.22s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "165.812282ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "41.951101ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "163.529662ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "41.949805ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)
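
The timings above show why --light exists: the plain listing queries each cluster's status (~160ms here) while the light variant reads only the profile configs (~40ms). A hedged Go sketch of consuming the JSON listing (the valid/invalid grouping and the Name field match minikube's output as far as I know; treat them as assumptions):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	// Assumed shape: {"invalid": [...], "valid": [{"Name": "minikube", ...}]}
	var profiles struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}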

TestFunctional/parallel/ServiceCmd/DeployApp (9.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-cqqm8" [2ba7cc6e-3efd-4169-815a-385044d0dd89] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-cqqm8" [2ba7cc6e-3efd-4169-815a-385044d0dd89] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003424437s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.13s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "323.368164ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:32567
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

TestFunctional/parallel/ServiceCmd/Format (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.14s)

TestFunctional/parallel/ServiceCmd/URL (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:32567
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)

TestFunctional/parallel/ServiceCmdConnect (6.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-7gmpn" [d7b88c92-2239-4d46-9a0c-a4ce43d56a72] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-7gmpn" [d7b88c92-2239-4d46-9a0c-a4ce43d56a72] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003544662s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:31933
functional_test.go:1675: http://10.138.0.48:31933: success! body:

Hostname: hello-node-connect-67bdd5bbb4-7gmpn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:31933
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.29s)
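
ServiceCmdConnect is the end-to-end variant: resolve the NodePort URL via `minikube service --url`, then issue a real HTTP GET and inspect the echoserver body shown above. A minimal Go sketch of the same round trip (illustrative only, not the test's source):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube where the NodePort service is exposed.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "minikube",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://10.138.0.48:31933

	// Hit the endpoint and dump the echoserver response.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: status %d\n%s", url, resp.StatusCode, body)
}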

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (22.69s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8f1578f4-1d81-4409-9217-cfcfd4885fcf] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003936864s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a8b05637-7959-49ee-b504-53d0d4c0cb9f] Pending
helpers_test.go:344: "sp-pod" [a8b05637-7959-49ee-b504-53d0d4c0cb9f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a8b05637-7959-49ee-b504-53d0d4c0cb9f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003768249s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.022092801s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [01d9b995-fb1e-43f5-8886-67fa575cd323] Pending
helpers_test.go:344: "sp-pod" [01d9b995-fb1e-43f5-8886-67fa575cd323] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [01d9b995-fb1e-43f5-8886-67fa575cd323] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003398477s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.69s)
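
The PVC test's core assertion is that a file written under the claim's mount survives deleting and recreating the pod. A compressed Go sketch of that assertion (illustrative; it assumes the same testdata manifests and skips the wait-for-Running steps the real test performs between apply and exec):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes kubectl against the minikube context and returns combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl", append([]string{"--context", "minikube"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	must := func(_ string, err error) {
		if err != nil {
			panic(err)
		}
	}
	must(run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")) // write through the PVC mount
	must(run("delete", "-f", "testdata/storage-provisioner/pod.yaml"))
	must(run("apply", "-f", "testdata/storage-provisioner/pod.yaml"))
	// (the real test waits for the new pod to be Running before this step)
	out, err := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		panic(err)
	}
	if !strings.Contains(out, "foo") {
		panic("file did not survive pod recreation")
	}
	fmt.Println("PVC-backed file persisted across pods")
}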

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 54659: operation not permitted
helpers_test.go:508: unable to kill pid 54610: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [29d9f679-3393-4322-8064-cc484860ef2b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [29d9f679-3393-4322-8064-cc484860ef2b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003652757s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.50.152 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
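
Taken together, the tunnel subtests manage `minikube tunnel` as a background process: start it, wait until the LoadBalancer service gains an ingress IP, and kill it on cleanup (the "stopping [...]" lines above). A hedged Go sketch of that lifecycle (illustrative; the service name comes from the Setup subtest's testsvc.yaml):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	tunnel := exec.Command("out/minikube-linux-amd64", "-p", "minikube", "tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil {
		panic(err)
	}
	defer tunnel.Process.Kill() // mirrors the harness's "stopping [...]" cleanup step

	// Poll for the ingress IP the way WaitService/IngressIP reads it.
	for i := 0; i < 60; i++ {
		out, _ := exec.Command("kubectl", "--context", "minikube", "get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); ip != "" {
			fmt.Println("tunnel is serving:", "http://"+ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP appeared before the deadline")
}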

TestFunctional/parallel/MySQL (22.23s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-wd2b9" [f48f2e41-3848-4c2c-8f60-3060a56f5fc7] Pending
helpers_test.go:344: "mysql-6cdb49bbb-wd2b9" [f48f2e41-3848-4c2c-8f60-3060a56f5fc7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-wd2b9" [f48f2e41-3848-4c2c-8f60-3060a56f5fc7] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004281327s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-wd2b9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-wd2b9 -- mysql -ppassword -e "show databases;": exit status 1 (111.236962ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-wd2b9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-wd2b9 -- mysql -ppassword -e "show databases;": exit status 1 (104.629484ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-wd2b9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-wd2b9 -- mysql -ppassword -e "show databases;": exit status 1 (104.193698ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-wd2b9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.23s)
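
The three non-zero exits above are the expected warm-up race: the pod is Running while mysqld is still initializing, so early attempts fail with ERROR 1045/2002 until the server accepts connections. A sketch of the retry loop the harness effectively performs (illustrative; pod name copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-6cdb49bbb-wd2b9" // pod name from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "minikube", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is up:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("mysql never came up: %v\n%s", err, out))
		}
		time.Sleep(5 * time.Second) // ERROR 1045/2002 while mysqld initializes
	}
}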

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.85s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.847933105s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.85s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (12.91s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.914526248s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (12.91s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
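
The NodeLabels assertion leans on kubectl's go-template output. Running the same template locally over a stand-in object shows exactly what kubectl evaluates: take the first item, range over metadata.labels, print each key. (The label values below are placeholders, not the node's real labels.)

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for the decoded `kubectl get nodes -o json` document.
	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/hostname": "ubuntu-20-agent-2", // sample values only
				"kubernetes.io/os":       "linux",
			}}},
		},
	}
	// The exact template string passed to kubectl in the test above.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}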

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.38s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (14.27s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.265901961s)
--- PASS: TestImageBuild/serial/Setup (14.27s)

TestImageBuild/serial/NormalBuild (1.65s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.651568239s)
--- PASS: TestImageBuild/serial/NormalBuild (1.65s)

TestImageBuild/serial/BuildWithBuildArg (0.8s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.80s)

TestImageBuild/serial/BuildWithDockerIgnore (0.58s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.58s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

                                                
                                    
TestJSONOutput/start/Command (30.47s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (30.470541306s)
--- PASS: TestJSONOutput/start/Command (30.47s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.39s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.390176267s)
--- PASS: TestJSONOutput/stop/Command (10.39s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.017183ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1831ebe6-1f49-4d43-808c-7e5088f9f32a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"40e11d1d-f661-4737-b081-dcfcee537e8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"0e078273-2f93-49e9-8e1a-db87beeeab08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"70474f9a-6257-485f-8dc7-70d0363a4810","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig"}}
	{"specversion":"1.0","id":"1d57b01e-d01f-43a6-8b48-a401d050e89e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube"}}
	{"specversion":"1.0","id":"178143eb-f90a-43db-9836-b3f83b911ec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a7e5256b-0ca1-410c-99ed-91d2dbc3531e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"57149cf0-3890-4cdc-95a7-9ec6d1992506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
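
Each stdout line above is a self-contained CloudEvents-style JSON object (specversion, id, source, type, and a string-valued data map). Below is a minimal Go sketch for consuming such a stream; the struct and field names are inferred from the lines shown here, not taken from minikube's source.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent mirrors the fields visible in the stream above; it is an
	// illustrative type, not one exported by minikube.
	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise in the stream
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.error":
				// Error events carry name/message/exitcode, as in the
				// DRV_UNSUPPORTED_OS event that ends the run above.
				fmt.Printf("error %s (exit %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n",
					ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			}
		}
	}

Piping the output of a --output=json start into a reader like this prints the step and error events in order; the DistinctCurrentSteps and IncreasingCurrentSteps subtests above evidently assert the corresponding invariants on the currentstep values of this stream.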

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (33.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.557253138s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.908896423s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.263305758s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.30s)

                                                
                                    
TestPause/serial/Start (28.13s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.128437686s)
--- PASS: TestPause/serial/Start (28.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (32.73s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (32.725507279s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.73s)

                                                
                                    
TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

                                                
                                    
TestPause/serial/VerifyStatus (0.12s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (121.976029ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.12s)
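
The status payload above is worth noting: a paused cluster reports HTTP-style status codes (418 "Paused", 405 "Stopped", 200 "OK") and the command itself exits with status 2, so callers must treat the non-zero exit as informational rather than fatal. A short Go sketch for reading it follows; the struct fields are inferred from the payload above and are not minikube's exported API.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// Types inferred from the -- stdout -- block above; illustrative only.
	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	}

	type clusterStatus struct {
		Name          string `json:"Name"`
		StatusCode    int    `json:"StatusCode"`
		StatusName    string `json:"StatusName"`
		StepDetail    string `json:"StepDetail"`
		BinaryVersion string `json:"BinaryVersion"`
		Nodes         []node `json:"Nodes"`
	}

	func main() {
		// A paused cluster makes the command exit non-zero (exit status 2
		// above), so the exec error is deliberately ignored here; Output
		// still returns the captured stdout.
		out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", "minikube",
			"--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, n := range st.Nodes {
			for name, c := range n.Components {
				fmt.Printf("%s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
			}
		}
	}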

                                                
                                    
TestPause/serial/Unpause (0.41s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

                                                
                                    
TestPause/serial/PauseAgain (0.53s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

                                                
                                    
TestPause/serial/DeletePaused (1.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.64625659s)
--- PASS: TestPause/serial/DeletePaused (1.65s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                    
TestRunningBinaryUpgrade (71.42s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3171252436 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3171252436 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (32.478418872s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.022028296s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.815555592s)
--- PASS: TestRunningBinaryUpgrade (71.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.51s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (49.47s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1122562544 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1122562544 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.25738301s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1122562544 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1122562544 -p minikube stop: (23.735063479s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.476910335s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.47s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                    
TestKubernetesUpgrade (317.39s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.915509642s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.327639807s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (72.054806ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.240201702s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (64.387261ms)

                                                
                                                
-- stdout --
	* minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.421709839s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.294800102s)
--- PASS: TestKubernetesUpgrade (317.39s)
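
The downgrade leg above is the interesting part: minikube upgrades the cluster from v1.20.0 to v1.31.0 in place, but refuses v1.31.0 to v1.20.0 with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) before touching the cluster. The following standalone sketch of that version gate uses golang.org/x/mod/semver for the comparison; it is illustrative only and not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/mod/semver"
	)

	// checkNoDowngrade mimics the guard exercised above: starting an existing
	// cluster with an older --kubernetes-version must fail fast.
	func checkNoDowngrade(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				existing, requested)
		}
		return nil
	}

	func main() {
		if err := checkNoDowngrade("v1.31.0", "v1.20.0"); err != nil {
			fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
			os.Exit(106) // the exit status observed in the run above
		}
	}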

                                                
                                    

Test skip (56/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.0/preload-exists 0
14 TestDownloadOnly/v1.31.0/cached-images 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.12
190 TestStartStop/group/newest-cni 0.12
191 TestStartStop/group/default-k8s-diff-port 0.12
192 TestStartStop/group/no-preload 0.12
193 TestStartStop/group/disable-driver-mounts 0.12
194 TestStartStop/group/embed-certs 0.12
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

                                                
                                    
TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

                                                
                                    
TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

                                                
                                    
TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

                                                
                                    
TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

                                                
                                    
TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

                                                
                                    
TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/old-k8s-version (0.12s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.12s)

                                                
                                    
TestStartStop/group/newest-cni (0.12s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.12s)

                                                
                                    
TestStartStop/group/no-preload (0.12s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

                                                
                                    
TestStartStop/group/embed-certs (0.12s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.12s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    