Test Report: none_Linux 19662

Commit 3f64d3c641e64b460ff7a3cff080aebef74ca5ca : 2024-09-17 : 36258

Failed tests (1/168)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 71.8         |
TestAddons/parallel/Registry (71.8s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.450439ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-hpl5r" [19701c96-17bb-45cc-97c3-1363596536ce] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003154911s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ntlkg" [0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003359073s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.081452228s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/17 17:07:52 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:44271               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:56 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:58 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 17 Sep 24 16:58 UTC | 17 Sep 24 16:58 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 17 Sep 24 17:07 UTC | 17 Sep 24 17:07 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:56:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:56:20.167200   21549 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:56:20.167466   21549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:56:20.167476   21549 out.go:358] Setting ErrFile to fd 2...
	I0917 16:56:20.167482   21549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:56:20.167652   21549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-10973/.minikube/bin
	I0917 16:56:20.168286   21549 out.go:352] Setting JSON to false
	I0917 16:56:20.169131   21549 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2325,"bootTime":1726589855,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:56:20.169211   21549 start.go:139] virtualization: kvm guest
	I0917 16:56:20.171366   21549 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0917 16:56:20.172501   21549 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-10973/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 16:56:20.172513   21549 notify.go:220] Checking for updates...
	I0917 16:56:20.172517   21549 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 16:56:20.173879   21549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:56:20.175082   21549 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	I0917 16:56:20.176413   21549 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	I0917 16:56:20.177725   21549 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 16:56:20.179004   21549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 16:56:20.180486   21549 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:56:20.189574   21549 out.go:177] * Using the none driver based on user configuration
	I0917 16:56:20.190733   21549 start.go:297] selected driver: none
	I0917 16:56:20.190746   21549 start.go:901] validating driver "none" against <nil>
	I0917 16:56:20.190767   21549 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 16:56:20.190794   21549 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0917 16:56:20.191098   21549 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0917 16:56:20.191662   21549 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:56:20.191942   21549 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:56:20.191977   21549 cni.go:84] Creating CNI manager for ""
	I0917 16:56:20.192037   21549 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:20.192046   21549 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:56:20.192135   21549 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:20.193539   21549 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0917 16:56:20.194835   21549 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/config.json ...
	I0917 16:56:20.194863   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/config.json: {Name:mk73a69cc461865d28915eacba554c9738276b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:20.195011   21549 start.go:360] acquireMachinesLock for minikube: {Name:mkf6ca58a29d7722db047e39d36e1d9ea30c76ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 16:56:20.195054   21549 start.go:364] duration metric: took 26.07µs to acquireMachinesLock for "minikube"
	I0917 16:56:20.195071   21549 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:56:20.195140   21549 start.go:125] createHost starting for "" (driver="none")
	I0917 16:56:20.196481   21549 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0917 16:56:20.197637   21549 exec_runner.go:51] Run: systemctl --version
	I0917 16:56:20.199918   21549 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0917 16:56:20.199943   21549 client.go:168] LocalClient.Create starting
	I0917 16:56:20.199995   21549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-10973/.minikube/certs/ca.pem
	I0917 16:56:20.200020   21549 main.go:141] libmachine: Decoding PEM data...
	I0917 16:56:20.200033   21549 main.go:141] libmachine: Parsing certificate...
	I0917 16:56:20.200094   21549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19662-10973/.minikube/certs/cert.pem
	I0917 16:56:20.200127   21549 main.go:141] libmachine: Decoding PEM data...
	I0917 16:56:20.200144   21549 main.go:141] libmachine: Parsing certificate...
	I0917 16:56:20.200455   21549 client.go:171] duration metric: took 505.959µs to LocalClient.Create
	I0917 16:56:20.200477   21549 start.go:167] duration metric: took 560.505µs to libmachine.API.Create "minikube"
	I0917 16:56:20.200484   21549 start.go:293] postStartSetup for "minikube" (driver="none")
	I0917 16:56:20.200527   21549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 16:56:20.200557   21549 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 16:56:20.209430   21549 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 16:56:20.209448   21549 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 16:56:20.209456   21549 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 16:56:20.211521   21549 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0917 16:56:20.212878   21549 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-10973/.minikube/addons for local assets ...
	I0917 16:56:20.212926   21549 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-10973/.minikube/files for local assets ...
	I0917 16:56:20.212955   21549 start.go:296] duration metric: took 12.461156ms for postStartSetup
	I0917 16:56:20.213586   21549 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/config.json ...
	I0917 16:56:20.213729   21549 start.go:128] duration metric: took 18.578432ms to createHost
	I0917 16:56:20.213745   21549 start.go:83] releasing machines lock for "minikube", held for 18.679495ms
	I0917 16:56:20.214144   21549 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 16:56:20.214239   21549 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0917 16:56:20.216533   21549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 16:56:20.216587   21549 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 16:56:20.224799   21549 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 16:56:20.224819   21549 start.go:495] detecting cgroup driver to use...
	I0917 16:56:20.224846   21549 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:56:20.224928   21549 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:20.242239   21549 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 16:56:20.250530   21549 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 16:56:20.259350   21549 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 16:56:20.259396   21549 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 16:56:20.268535   21549 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:56:20.277241   21549 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 16:56:20.285549   21549 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:56:20.293667   21549 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 16:56:20.302112   21549 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 16:56:20.309960   21549 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 16:56:20.318030   21549 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 16:56:20.326784   21549 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 16:56:20.336881   21549 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 16:56:20.343847   21549 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0917 16:56:20.537578   21549 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0917 16:56:20.606503   21549 start.go:495] detecting cgroup driver to use...
	I0917 16:56:20.606563   21549 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:56:20.606680   21549 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:20.625634   21549 exec_runner.go:51] Run: which cri-dockerd
	I0917 16:56:20.626560   21549 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 16:56:20.634735   21549 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0917 16:56:20.634754   21549 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0917 16:56:20.634788   21549 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0917 16:56:20.641620   21549 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 16:56:20.641753   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2584503129 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0917 16:56:20.648836   21549 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0917 16:56:20.857208   21549 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0917 16:56:21.067040   21549 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 16:56:21.067197   21549 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0917 16:56:21.067213   21549 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0917 16:56:21.067252   21549 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0917 16:56:21.075266   21549 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0917 16:56:21.075405   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4133767590 /etc/docker/daemon.json
	I0917 16:56:21.083389   21549 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0917 16:56:21.292418   21549 exec_runner.go:51] Run: sudo systemctl restart docker
	I0917 16:56:21.584255   21549 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 16:56:21.594852   21549 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0917 16:56:21.616239   21549 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:56:21.626511   21549 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0917 16:56:21.829136   21549 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0917 16:56:22.036083   21549 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0917 16:56:22.245887   21549 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0917 16:56:22.259945   21549 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:56:22.270283   21549 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0917 16:56:22.472965   21549 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0917 16:56:22.538178   21549 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 16:56:22.538253   21549 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0917 16:56:22.540172   21549 start.go:563] Will wait 60s for crictl version
	I0917 16:56:22.540216   21549 exec_runner.go:51] Run: which crictl
	I0917 16:56:22.541081   21549 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0917 16:56:22.571811   21549 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 16:56:22.571863   21549 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0917 16:56:22.592066   21549 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0917 16:56:22.614306   21549 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 16:56:22.614388   21549 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0917 16:56:22.616955   21549 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0917 16:56:22.617993   21549 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 16:56:22.618123   21549 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:22.618139   21549 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0917 16:56:22.618247   21549 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0917 16:56:22.618316   21549 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0917 16:56:22.663260   21549 cni.go:84] Creating CNI manager for ""
	I0917 16:56:22.663284   21549 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:22.663293   21549 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 16:56:22.663317   21549 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 16:56:22.663477   21549 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 16:56:22.663541   21549 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 16:56:22.671198   21549 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0917 16:56:22.671240   21549 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0917 16:56:22.678377   21549 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0917 16:56:22.678397   21549 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0917 16:56:22.678400   21549 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0917 16:56:22.678430   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0917 16:56:22.678437   21549 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0917 16:56:22.678438   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0917 16:56:22.689006   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0917 16:56:22.726107   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube909575354 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0917 16:56:22.737932   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2542862677 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0917 16:56:22.768683   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3415606232 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0917 16:56:22.833707   21549 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 16:56:22.841478   21549 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0917 16:56:22.841498   21549 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0917 16:56:22.841536   21549 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0917 16:56:22.848784   21549 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0917 16:56:22.848978   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2764826640 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0917 16:56:22.858109   21549 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0917 16:56:22.858125   21549 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0917 16:56:22.858152   21549 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0917 16:56:22.865381   21549 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 16:56:22.865531   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2155642184 /lib/systemd/system/kubelet.service
	I0917 16:56:22.873913   21549 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0917 16:56:22.874029   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4188745215 /var/tmp/minikube/kubeadm.yaml.new
	I0917 16:56:22.881324   21549 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0917 16:56:22.882615   21549 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0917 16:56:23.108225   21549 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0917 16:56:23.121812   21549 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube for IP: 10.138.0.48
	I0917 16:56:23.121830   21549 certs.go:194] generating shared ca certs ...
	I0917 16:56:23.121848   21549 certs.go:226] acquiring lock for ca certs: {Name:mk77f96de1799d1206a47099b46138b8eb0312f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.121973   21549 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-10973/.minikube/ca.key
	I0917 16:56:23.122016   21549 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-10973/.minikube/proxy-client-ca.key
	I0917 16:56:23.122026   21549 certs.go:256] generating profile certs ...
	I0917 16:56:23.122082   21549 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/client.key
	I0917 16:56:23.122101   21549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/client.crt with IP's: []
	I0917 16:56:23.237816   21549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/client.crt ...
	I0917 16:56:23.237842   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/client.crt: {Name:mk401703ead965cc91ad478f47b8655f1484908c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.237971   21549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/client.key ...
	I0917 16:56:23.237983   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/client.key: {Name:mk156b682e35a102b1cb9cac99a314532b9daa15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.238046   21549 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0917 16:56:23.238060   21549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0917 16:56:23.417291   21549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0917 16:56:23.417319   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkecd5044bd600597617dc72d1e958196a5322ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.417440   21549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0917 16:56:23.417453   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk9cd2c0ad20f3062effb2dcf31df3f9b682e8c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.417508   21549 certs.go:381] copying /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.crt
	I0917 16:56:23.417575   21549 certs.go:385] copying /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.key
	I0917 16:56:23.417625   21549 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.key
	I0917 16:56:23.417638   21549 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0917 16:56:23.469805   21549 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.crt ...
	I0917 16:56:23.469833   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.crt: {Name:mk012179ea8de4e49e990d20e5bc7f9b728a785c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.469954   21549 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.key ...
	I0917 16:56:23.469964   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.key: {Name:mk39fa8096a8e79e1fc769e01a8ce454a8a17142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:23.470108   21549 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-10973/.minikube/certs/ca-key.pem (1679 bytes)
	I0917 16:56:23.470139   21549 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-10973/.minikube/certs/ca.pem (1082 bytes)
	I0917 16:56:23.470160   21549 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-10973/.minikube/certs/cert.pem (1123 bytes)
	I0917 16:56:23.470181   21549 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-10973/.minikube/certs/key.pem (1679 bytes)
	I0917 16:56:23.470702   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 16:56:23.470809   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube637789322 /var/lib/minikube/certs/ca.crt
	I0917 16:56:23.478858   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 16:56:23.479018   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2140184775 /var/lib/minikube/certs/ca.key
	I0917 16:56:23.486671   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 16:56:23.486805   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2496347069 /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 16:56:23.494433   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 16:56:23.494554   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3036871451 /var/lib/minikube/certs/proxy-client-ca.key
	I0917 16:56:23.502848   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0917 16:56:23.503030   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube625575790 /var/lib/minikube/certs/apiserver.crt
	I0917 16:56:23.510374   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 16:56:23.510501   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1595773951 /var/lib/minikube/certs/apiserver.key
	I0917 16:56:23.518001   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 16:56:23.518134   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube403374119 /var/lib/minikube/certs/proxy-client.crt
	I0917 16:56:23.525831   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 16:56:23.525940   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube424807759 /var/lib/minikube/certs/proxy-client.key
	I0917 16:56:23.534156   21549 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0917 16:56:23.534174   21549 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:23.534201   21549 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:23.542091   21549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-10973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 16:56:23.542226   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1874047121 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:23.549784   21549 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 16:56:23.549881   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3580451430 /var/lib/minikube/kubeconfig
	I0917 16:56:23.558572   21549 exec_runner.go:51] Run: openssl version
	I0917 16:56:23.561563   21549 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 16:56:23.569693   21549 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:23.570944   21549 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:23.570984   21549 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:23.573724   21549 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 16:56:23.581414   21549 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 16:56:23.582480   21549 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 16:56:23.582518   21549 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:23.582612   21549 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 16:56:23.597875   21549 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 16:56:23.606107   21549 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 16:56:23.613839   21549 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0917 16:56:23.635064   21549 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 16:56:23.643054   21549 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 16:56:23.643072   21549 kubeadm.go:157] found existing configuration files:
	
	I0917 16:56:23.643104   21549 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 16:56:23.650368   21549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 16:56:23.650410   21549 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 16:56:23.657534   21549 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 16:56:23.664706   21549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 16:56:23.664741   21549 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 16:56:23.671860   21549 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 16:56:23.679040   21549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 16:56:23.679081   21549 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 16:56:23.686028   21549 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 16:56:23.693291   21549 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 16:56:23.693331   21549 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 16:56:23.700313   21549 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 16:56:23.735031   21549 kubeadm.go:310] W0917 16:56:23.734900   22420 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:23.735558   21549 kubeadm.go:310] W0917 16:56:23.735509   22420 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:23.737171   21549 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 16:56:23.737203   21549 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 16:56:23.835442   21549 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 16:56:23.835553   21549 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 16:56:23.835563   21549 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 16:56:23.835568   21549 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 16:56:23.847173   21549 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 16:56:23.850154   21549 out.go:235]   - Generating certificates and keys ...
	I0917 16:56:23.850187   21549 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 16:56:23.850212   21549 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 16:56:23.927037   21549 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 16:56:24.156452   21549 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 16:56:24.348230   21549 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 16:56:24.530821   21549 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 16:56:24.719645   21549 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 16:56:24.719778   21549 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0917 16:56:24.780502   21549 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 16:56:24.780581   21549 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0917 16:56:24.921272   21549 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 16:56:24.989569   21549 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 16:56:25.084653   21549 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 16:56:25.084858   21549 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 16:56:25.499538   21549 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 16:56:25.609748   21549 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 16:56:25.746104   21549 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 16:56:26.011934   21549 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 16:56:26.106037   21549 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 16:56:26.106802   21549 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 16:56:26.110284   21549 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 16:56:26.112167   21549 out.go:235]   - Booting up control plane ...
	I0917 16:56:26.112194   21549 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 16:56:26.112211   21549 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 16:56:26.113110   21549 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 16:56:26.136089   21549 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 16:56:26.140526   21549 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 16:56:26.140551   21549 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 16:56:26.367826   21549 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 16:56:26.367849   21549 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 16:56:26.869406   21549 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.561027ms
	I0917 16:56:26.869427   21549 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 16:56:30.871205   21549 kubeadm.go:310] [api-check] The API server is healthy after 4.001773096s
	I0917 16:56:30.882411   21549 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 16:56:30.893520   21549 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 16:56:30.910832   21549 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 16:56:30.910862   21549 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 16:56:30.917952   21549 kubeadm.go:310] [bootstrap-token] Using token: axr8do.3dchamcq8j0kmuea
	I0917 16:56:30.919418   21549 out.go:235]   - Configuring RBAC rules ...
	I0917 16:56:30.919453   21549 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 16:56:30.922696   21549 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 16:56:30.927794   21549 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 16:56:30.930405   21549 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 16:56:30.933190   21549 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 16:56:30.936622   21549 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 16:56:31.277421   21549 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 16:56:31.699254   21549 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 16:56:32.276978   21549 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 16:56:32.277867   21549 kubeadm.go:310] 
	I0917 16:56:32.277881   21549 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 16:56:32.277886   21549 kubeadm.go:310] 
	I0917 16:56:32.277891   21549 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 16:56:32.277895   21549 kubeadm.go:310] 
	I0917 16:56:32.277901   21549 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 16:56:32.277905   21549 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 16:56:32.277908   21549 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 16:56:32.277912   21549 kubeadm.go:310] 
	I0917 16:56:32.277915   21549 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 16:56:32.277918   21549 kubeadm.go:310] 
	I0917 16:56:32.277921   21549 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 16:56:32.277924   21549 kubeadm.go:310] 
	I0917 16:56:32.277928   21549 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 16:56:32.277931   21549 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 16:56:32.277935   21549 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 16:56:32.277939   21549 kubeadm.go:310] 
	I0917 16:56:32.277944   21549 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 16:56:32.277948   21549 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 16:56:32.277958   21549 kubeadm.go:310] 
	I0917 16:56:32.277962   21549 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token axr8do.3dchamcq8j0kmuea \
	I0917 16:56:32.277968   21549 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:67538af7c9fbbd8d1e5c1c3e603715c05c80bfbf2b391af04235e146e003a1a0 \
	I0917 16:56:32.277973   21549 kubeadm.go:310] 	--control-plane 
	I0917 16:56:32.277977   21549 kubeadm.go:310] 
	I0917 16:56:32.277982   21549 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 16:56:32.277986   21549 kubeadm.go:310] 
	I0917 16:56:32.277990   21549 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token axr8do.3dchamcq8j0kmuea \
	I0917 16:56:32.277994   21549 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:67538af7c9fbbd8d1e5c1c3e603715c05c80bfbf2b391af04235e146e003a1a0 
	I0917 16:56:32.280884   21549 cni.go:84] Creating CNI manager for ""
	I0917 16:56:32.280913   21549 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:32.282683   21549 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 16:56:32.284022   21549 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0917 16:56:32.293830   21549 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 16:56:32.293957   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3106448852 /etc/cni/net.d/1-k8s.conflist
	I0917 16:56:32.303954   21549 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 16:56:32.304038   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:32.304040   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_17T16_56_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0917 16:56:32.313339   21549 ops.go:34] apiserver oom_adj: -16
	I0917 16:56:32.381360   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:32.881643   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:33.382019   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:33.881994   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:34.381677   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:34.881472   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:35.382332   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:35.882328   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.381844   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.882235   21549 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:36.947732   21549 kubeadm.go:1113] duration metric: took 4.643758846s to wait for elevateKubeSystemPrivileges
	I0917 16:56:36.947767   21549 kubeadm.go:394] duration metric: took 13.365252066s to StartCluster
	I0917 16:56:36.947789   21549 settings.go:142] acquiring lock: {Name:mk0f1c0a7bd999e41ff8c2bc06778e387aee7a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:36.947873   21549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-10973/kubeconfig
	I0917 16:56:36.948528   21549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-10973/kubeconfig: {Name:mkd9985159996870446a987c3ca25a818890b365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:36.948759   21549 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 16:56:36.948830   21549 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 16:56:36.948962   21549 addons.go:69] Setting yakd=true in profile "minikube"
	I0917 16:56:36.948978   21549 addons.go:234] Setting addon yakd=true in "minikube"
	I0917 16:56:36.948991   21549 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:36.949012   21549 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0917 16:56:36.949021   21549 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0917 16:56:36.949029   21549 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0917 16:56:36.949033   21549 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0917 16:56:36.949037   21549 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0917 16:56:36.948990   21549 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0917 16:56:36.949044   21549 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0917 16:56:36.949017   21549 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0917 16:56:36.949054   21549 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0917 16:56:36.949056   21549 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0917 16:56:36.949056   21549 addons.go:69] Setting registry=true in profile "minikube"
	I0917 16:56:36.949064   21549 mustload.go:65] Loading cluster: minikube
	I0917 16:56:36.949073   21549 addons.go:234] Setting addon registry=true in "minikube"
	I0917 16:56:36.949081   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.949091   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.949101   21549 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0917 16:56:36.949117   21549 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0917 16:56:36.949122   21549 addons.go:69] Setting volcano=true in profile "minikube"
	I0917 16:56:36.949138   21549 addons.go:234] Setting addon volcano=true in "minikube"
	I0917 16:56:36.949159   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.949230   21549 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:36.949344   21549 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0917 16:56:36.949371   21549 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0917 16:56:36.949406   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.949661   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.949679   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.949713   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.949713   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.949727   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.949000   21549 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0917 16:56:36.949747   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.949759   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.949760   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.949762   21549 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0917 16:56:36.949786   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.949092   21549 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0917 16:56:36.949817   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.949661   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.949875   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.949903   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.949982   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.949994   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.950028   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.949713   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.950426   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.950462   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.950500   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.950515   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.950547   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.950635   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.950650   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.950682   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.950902   21549 out.go:177] * Configuring local host environment ...
	I0917 16:56:36.949034   21549 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0917 16:56:36.949013   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.949101   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.949046   21549 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0917 16:56:36.949002   21549 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0917 16:56:36.949786   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.951615   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.952247   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.952261   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.952290   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.951212   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:36.953774   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.953794   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.953823   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.953870   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.953890   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.953921   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.954039   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.954058   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.954086   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.953231   21549 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0917 16:56:36.954166   21549 host.go:66] Checking if "minikube" exists ...
	W0917 16:56:36.954191   21549 out.go:270] * 
	W0917 16:56:36.954216   21549 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0917 16:56:36.954223   21549 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0917 16:56:36.954242   21549 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0917 16:56:36.954252   21549 out.go:270] * 
	W0917 16:56:36.954292   21549 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0917 16:56:36.954301   21549 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0917 16:56:36.954307   21549 out.go:270] * 
	W0917 16:56:36.954336   21549 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0917 16:56:36.954346   21549 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0917 16:56:36.954352   21549 out.go:270] * 
	W0917 16:56:36.954359   21549 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0917 16:56:36.954384   21549 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:56:36.954794   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.954838   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.954876   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.957213   21549 out.go:177] * Verifying Kubernetes components...
	I0917 16:56:36.962673   21549 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0917 16:56:36.973122   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:36.974728   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:36.977684   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:36.982755   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:36.982779   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:36.982815   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:36.989260   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:36.993932   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:36.994415   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:36.997903   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.000578   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.005563   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.006072   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.009933   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.014073   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.014129   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.015187   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.017214   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.017262   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.027062   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.027060   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.027122   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.027136   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.029000   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.029051   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.030172   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.030437   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.030511   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.031521   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.031565   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.032783   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.032832   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.033909   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.033957   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.034133   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.034173   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.037974   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.038026   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.041913   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.041981   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.044472   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.044782   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.044809   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.045074   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.045094   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.048296   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.048317   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.048784   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.048802   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.050853   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.051140   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.053185   21549 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0917 16:56:37.053229   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:37.054033   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:37.054052   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:37.054085   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:37.054119   21549 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 16:56:37.054790   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.054808   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.055775   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.056048   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.056163   21549 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:37.056796   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 16:56:37.057067   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2142654959 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:37.058390   21549 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 16:56:37.059402   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.059957   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.059978   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.060064   21549 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:37.060120   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 16:56:37.060299   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2744719126 /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:37.060863   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 16:56:37.061234   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.061252   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.061467   21549 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0917 16:56:37.061503   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:37.062293   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:37.062311   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:37.062337   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:37.063136   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.063159   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.064012   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 16:56:37.065468   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 16:56:37.065835   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.065856   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.067796   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.067810   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.068028   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.068408   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 16:56:37.069334   21549 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 16:56:37.070054   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.070140   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.072002   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.072428   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.075174   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:37.074672   21549 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:56:37.075274   21549 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 16:56:37.075004   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.075417   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1980918842 /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:56:37.076822   21549 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 16:56:37.077001   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 16:56:37.077239   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.077257   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.078094   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.078195   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 16:56:37.078256   21549 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:56:37.078278   21549 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 16:56:37.078412   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube863668725 /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:56:37.080343   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 16:56:37.080407   21549 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:56:37.080428   21549 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 16:56:37.080543   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3382148847 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:56:37.083053   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 16:56:37.083111   21549 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 16:56:37.084019   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.084415   21549 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:56:37.084437   21549 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 16:56:37.084550   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube236414170 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:56:37.084992   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.085630   21549 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 16:56:37.086691   21549 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:56:37.086711   21549 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 16:56:37.086826   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2465169662 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:56:37.088452   21549 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 16:56:37.090908   21549 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 16:56:37.093289   21549 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 16:56:37.096452   21549 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:56:37.096486   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 16:56:37.097453   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1240457384 /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:56:37.098958   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.099011   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.101120   21549 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:56:37.101156   21549 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 16:56:37.106164   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube323087323 /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:56:37.107960   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.107984   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.112830   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:37.114304   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.116011   21549 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 16:56:37.117225   21549 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 16:56:37.117262   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 16:56:37.117383   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2605356083 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 16:56:37.119820   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.119842   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.123881   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:56:37.123937   21549 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:56:37.123962   21549 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 16:56:37.124162   21549 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:56:37.124182   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 16:56:37.124214   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube305534167 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:56:37.124285   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube115736585 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:56:37.124927   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.126307   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:37.126782   21549 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 16:56:37.128266   21549 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:37.128287   21549 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0917 16:56:37.128296   21549 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:37.128334   21549 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:37.129349   21549 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:56:37.129378   21549 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 16:56:37.129518   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2476013529 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:56:37.131342   21549 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 16:56:37.131369   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:37.131371   21549 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 16:56:37.131484   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2181085007 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 16:56:37.131843   21549 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:56:37.131865   21549 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 16:56:37.131984   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1314151169 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:56:37.135325   21549 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:56:37.135351   21549 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 16:56:37.135471   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube415044159 /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:56:37.138375   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.138421   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.138494   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:37.138540   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:37.140788   21549 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:56:37.140813   21549 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 16:56:37.140939   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2954118547 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:56:37.141751   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.141771   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.142767   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 16:56:37.143520   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1857493385 /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:37.148214   21549 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:56:37.148240   21549 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 16:56:37.148367   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2507674443 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:56:37.148695   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.154046   21549 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 16:56:37.157465   21549 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 16:56:37.158841   21549 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:56:37.158872   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 16:56:37.159092   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1137526535 /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:56:37.159544   21549 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:56:37.159568   21549 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 16:56:37.159670   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2471709955 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:56:37.167045   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.167070   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.167671   21549 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:56:37.167693   21549 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 16:56:37.167802   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3524699489 /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:56:37.167950   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:37.167965   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:37.168443   21549 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 16:56:37.168462   21549 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 16:56:37.168494   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:37.168568   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4257243421 /etc/kubernetes/addons/ig-role.yaml
	I0917 16:56:37.179598   21549 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:37.179623   21549 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 16:56:37.179728   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1034553314 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:37.180658   21549 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 16:56:37.183951   21549 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:37.183979   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 16:56:37.183989   21549 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:56:37.184012   21549 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 16:56:37.184141   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1090311702 /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:37.184188   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3424423984 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:56:37.184544   21549 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:37.184567   21549 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 16:56:37.184663   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3616946769 /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:37.188840   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.190796   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:37.190843   21549 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:37.190857   21549 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0917 16:56:37.190864   21549 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:37.190925   21549 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:37.191137   21549 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:56:37.191153   21549 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 16:56:37.191255   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3526248785 /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:56:37.192400   21549 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:56:37.192475   21549 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 16:56:37.192641   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3620082163 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:56:37.192881   21549 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 16:56:37.194523   21549 out.go:177]   - Using image docker.io/busybox:stable
	I0917 16:56:37.196260   21549 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:37.196287   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 16:56:37.196417   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube106190911 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:37.205969   21549 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:56:37.206010   21549 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 16:56:37.206159   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube761628845 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:56:37.206182   21549 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:56:37.206206   21549 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 16:56:37.206318   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube752551433 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:56:37.209746   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:37.210829   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:56:37.218590   21549 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:56:37.218630   21549 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 16:56:37.218758   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube306544698 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:56:37.221820   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:37.238516   21549 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 16:56:37.238709   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1658197659 /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:37.239105   21549 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:37.239132   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 16:56:37.239248   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2913991779 /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:37.242006   21549 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:56:37.242036   21549 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 16:56:37.242168   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube889392938 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:56:37.261014   21549 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:56:37.261047   21549 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 16:56:37.261178   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1047352187 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:56:37.262634   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:37.275868   21549 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:56:37.275901   21549 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 16:56:37.276019   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3939686026 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:56:37.282706   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:37.293824   21549 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:37.293852   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 16:56:37.293976   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3036694754 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:37.306695   21549 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:56:37.306728   21549 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 16:56:37.307426   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2075787777 /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:56:37.314647   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:37.344718   21549 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:56:37.344754   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 16:56:37.344883   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube103694644 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:56:37.345182   21549 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:37.345204   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 16:56:37.345313   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2214255554 /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:37.384932   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:37.402550   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:37.419005   21549 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:56:37.419084   21549 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 16:56:37.419344   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube451041671 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:56:37.468212   21549 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0917 16:56:37.523306   21549 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:56:37.523341   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 16:56:37.523597   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1493079225 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:56:37.525572   21549 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0917 16:56:37.529033   21549 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0917 16:56:37.529056   21549 node_ready.go:38] duration metric: took 3.457626ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0917 16:56:37.529077   21549 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:56:37.546959   21549 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9qpj2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:37.648027   21549 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:56:37.648069   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 16:56:37.648226   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3193401580 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:56:37.832702   21549 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:37.832745   21549 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 16:56:37.832883   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3539632317 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:37.883098   21549 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0917 16:56:37.991397   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:38.279517   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.068642831s)
	I0917 16:56:38.308745   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.046056447s)
	I0917 16:56:38.308788   21549 addons.go:475] Verifying addon registry=true in "minikube"
	I0917 16:56:38.310318   21549 out.go:177] * Verifying registry addon...
	I0917 16:56:38.312545   21549 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 16:56:38.316198   21549 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 16:56:38.316217   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:38.388450   21549 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0917 16:56:38.445696   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.235901164s)
	I0917 16:56:38.453109   21549 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0917 16:56:38.561905   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.279151803s)
	I0917 16:56:38.561943   21549 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0917 16:56:38.592979   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.371112957s)
	I0917 16:56:38.719463   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.316858586s)
	I0917 16:56:38.817942   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:39.181300   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.796303894s)
	W0917 16:56:39.181343   21549 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 16:56:39.181382   21549 retry.go:31] will retry after 268.572617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
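The `no matches for kind "VolumeSnapshotClass"` error above is a race inside a single apply: the same batch both creates the snapshot CRDs and a VolumeSnapshotClass that depends on them, and the CRDs are not yet established when the custom resource is validated, so the apply exits 1 and minikube's retry loop (retry.go:31) re-runs it. A minimal sketch of that retry-with-backoff pattern (function name and arguments are illustrative, not minikube's actual code):

```shell
# retry ATTEMPTS INITIAL_DELAY CMD [ARGS...]
# Re-run CMD until it succeeds or ATTEMPTS runs out, doubling the
# sleep between tries -- the same shape as the "will retry after
# 268.572617ms" behavior in the log above.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=1
  while :; do
    "$@" && return 0                      # command succeeded
    [ "$i" -ge "$attempts" ] && return 1  # out of attempts
    sleep "$delay"
    delay=$((delay * 2))                  # exponential backoff
    i=$((i + 1))
  done
}
```

On a live cluster the race can also be avoided by waiting for CRD establishment between the two applies, e.g. `kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s`.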
	I0917 16:56:39.317605   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:39.450502   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:39.567898   21549 pod_ready.go:103] pod "coredns-7c65d6cfc9-9qpj2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:39.828603   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:40.234798   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.110878574s)
	I0917 16:56:40.316291   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:40.646010   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.195442559s)
	I0917 16:56:40.690734   21549 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.699281485s)
	I0917 16:56:40.690773   21549 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0917 16:56:40.692677   21549 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 16:56:40.697740   21549 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 16:56:40.705450   21549 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 16:56:40.705472   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:40.816127   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:41.203264   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:41.317379   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:41.703292   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:41.816498   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:42.053619   21549 pod_ready.go:103] pod "coredns-7c65d6cfc9-9qpj2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:42.203483   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:42.317031   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:42.703260   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:42.816653   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:43.202557   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:43.317330   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:43.702947   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:43.816420   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:44.081810   21549 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 16:56:44.081952   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3293669463 /var/lib/minikube/google_application_credentials.json
	I0917 16:56:44.094406   21549 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 16:56:44.094541   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3687469075 /var/lib/minikube/google_cloud_project
	I0917 16:56:44.103087   21549 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0917 16:56:44.103144   21549 host.go:66] Checking if "minikube" exists ...
	I0917 16:56:44.103628   21549 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0917 16:56:44.103647   21549 api_server.go:166] Checking apiserver status ...
	I0917 16:56:44.103683   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:44.126984   21549 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22840/cgroup
	I0917 16:56:44.136921   21549 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731"
	I0917 16:56:44.136980   21549 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/50c770153cad080cde49c9bf489cb6e41f7ea6426c2127762d33d1b1983fa731/freezer.state
	I0917 16:56:44.146676   21549 api_server.go:204] freezer state: "THAWED"
	I0917 16:56:44.146712   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:44.150428   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
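The sequence above is minikube confirming the apiserver is actually running before trusting `/healthz`: it resolves the apiserver PID with `pgrep`, extracts that process's freezer cgroup from `/proc/<pid>/cgroup`, and checks that `freezer.state` is `THAWED` (the cgroup is not frozen). A sketch of just the parsing step, assuming the cgroup v1 line format `hierarchy-ID:controller-list:path` (the function name is illustrative):

```shell
# Extract the freezer cgroup path from /proc/<pid>/cgroup content on stdin.
# cgroup v1 lines look like "7:freezer:/kubepods/burstable/<pod>/<container>";
# the path is then appended to /sys/fs/cgroup/freezer/ to read freezer.state.
freezer_cgroup() {
  sed -n 's/^[0-9][0-9]*:freezer:\(.*\)$/\1/p'
}
```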
	I0917 16:56:44.150493   21549 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 16:56:44.169107   21549 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:44.176632   21549 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 16:56:44.185145   21549 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 16:56:44.185193   21549 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 16:56:44.185332   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3217818598 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 16:56:44.194751   21549 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 16:56:44.194778   21549 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 16:56:44.194914   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3347046434 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 16:56:44.202445   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:44.203718   21549 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:44.203742   21549 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 16:56:44.203854   21549 exec_runner.go:51] Run: sudo cp -a /tmp/minikube863579863 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:44.213949   21549 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:56:44.316340   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:44.626361   21549 pod_ready.go:103] pod "coredns-7c65d6cfc9-9qpj2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:56:44.858528   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:44.858990   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:44.939364   21549 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0917 16:56:44.941221   21549 out.go:177] * Verifying gcp-auth addon...
	I0917 16:56:44.943656   21549 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 16:56:44.960309   21549 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:56:45.053268   21549 pod_ready.go:93] pod "coredns-7c65d6cfc9-9qpj2" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:45.053294   21549 pod_ready.go:82] duration metric: took 7.506255232s for pod "coredns-7c65d6cfc9-9qpj2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:45.053305   21549 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jsk7h" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:45.059168   21549 pod_ready.go:98] pod "coredns-7c65d6cfc9-jsk7h" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-17 16:56:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-17 16:56:38 +0000 UTC,FinishedAt:2024-09-17 16:56:44 +0000 UTC,ContainerID:docker://12eb32942f006adfba70adb145b3f8d1a5aba7282de6e6fbba601a7b92f01740,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://12eb32942f006adfba70adb145b3f8d1a5aba7282de6e6fbba601a7b92f01740 Started:0xc002593010 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002594bd0} {Name:kube-api-access-m5gjh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002594be0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0917 16:56:45.059203   21549 pod_ready.go:82] duration metric: took 5.889765ms for pod "coredns-7c65d6cfc9-jsk7h" in "kube-system" namespace to be "Ready" ...
	E0917 16:56:45.059215   21549 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-jsk7h" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:56:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-17 16:56:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-17 16:56:38 +0000 UTC,FinishedAt:2024-09-17 16:56:44 +0000 UTC,ContainerID:docker://12eb32942f006adfba70adb145b3f8d1a5aba7282de6e6fbba601a7b92f01740,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://12eb32942f006adfba70adb145b3f8d1a5aba7282de6e6fbba601a7b92f01740 Started:0xc002593010 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002594bd0} {Name:kube-api-access-m5gjh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002594be0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
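The `(skipping!)` entries above come from the wait loop noticing that this coredns replica has phase `Succeeded`: it exited cleanly while the deployment scaled down to one replica, and a pod in a terminal phase will never report `Ready`, so blocking on it for the full 6m0s would only stall the test. The decision reduces to a phase check (a sketch; the function name is illustrative, not minikube's actual pod_ready.go API):

```shell
# Decide whether a readiness wait on a pod can ever make progress.
# Succeeded and Failed are terminal phases: the pod's containers will
# not restart, so the Ready condition can never become True.
should_wait_for_ready() {
  phase=$1
  case "$phase" in
    Succeeded|Failed) return 1 ;;  # terminal phase: skip the pod
    *) return 0 ;;                 # Pending/Running/Unknown: keep waiting
  esac
}
```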
	I0917 16:56:45.059231   21549 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:45.202460   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:45.316863   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:45.703006   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:45.815929   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:46.065222   21549 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:46.065248   21549 pod_ready.go:82] duration metric: took 1.006006661s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.065261   21549 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.070010   21549 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:46.070050   21549 pod_ready.go:82] duration metric: took 4.772902ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.070062   21549 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.074326   21549 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:46.074346   21549 pod_ready.go:82] duration metric: took 4.274826ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.074358   21549 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gh688" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.205376   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:46.251466   21549 pod_ready.go:93] pod "kube-proxy-gh688" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:46.251491   21549 pod_ready.go:82] duration metric: took 177.124352ms for pod "kube-proxy-gh688" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.251504   21549 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:46.316907   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:46.702824   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:46.822793   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:47.051870   21549 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0917 16:56:47.051895   21549 pod_ready.go:82] duration metric: took 800.383686ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0917 16:56:47.051904   21549 pod_ready.go:39] duration metric: took 9.522814016s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:56:47.051927   21549 api_server.go:52] waiting for apiserver process to appear ...
	I0917 16:56:47.051981   21549 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:56:47.073062   21549 api_server.go:72] duration metric: took 10.118644512s to wait for apiserver process to appear ...
	I0917 16:56:47.073090   21549 api_server.go:88] waiting for apiserver healthz status ...
	I0917 16:56:47.073114   21549 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0917 16:56:47.077369   21549 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0917 16:56:47.078266   21549 api_server.go:141] control plane version: v1.31.1
	I0917 16:56:47.078292   21549 api_server.go:131] duration metric: took 5.194087ms to wait for apiserver health ...
	I0917 16:56:47.078302   21549 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 16:56:47.202209   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:47.256037   21549 system_pods.go:59] 17 kube-system pods found
	I0917 16:56:47.256071   21549 system_pods.go:61] "coredns-7c65d6cfc9-9qpj2" [080c228a-31be-4ade-9a02-68ef48f2ca0e] Running
	I0917 16:56:47.256083   21549 system_pods.go:61] "csi-hostpath-attacher-0" [25548efe-9818-480e-90dd-6171b5c0f937] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 16:56:47.256096   21549 system_pods.go:61] "csi-hostpath-resizer-0" [5a25b87c-c0d5-45ea-a309-d3e9c540acde] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 16:56:47.256108   21549 system_pods.go:61] "csi-hostpathplugin-cx9p4" [56814e30-bc55-441a-9a68-2a4046289c51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 16:56:47.256116   21549 system_pods.go:61] "etcd-ubuntu-20-agent-2" [acec1466-e28f-4274-ace2-d1d0bec539c6] Running
	I0917 16:56:47.256123   21549 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [c897cfd6-e267-4b25-bed4-ab17bd209143] Running
	I0917 16:56:47.256131   21549 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [c7c5063e-2543-445c-b5bd-87e7da62e0a8] Running
	I0917 16:56:47.256142   21549 system_pods.go:61] "kube-proxy-gh688" [3380bac6-9728-4704-960e-e2aa8e092287] Running
	I0917 16:56:47.256149   21549 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [4c23873f-7d20-4076-b15d-4dc60defdb16] Running
	I0917 16:56:47.256162   21549 system_pods.go:61] "metrics-server-84c5f94fbc-8gkhd" [58aa5963-88a9-4e47-afc2-de6497453b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 16:56:47.256170   21549 system_pods.go:61] "nvidia-device-plugin-daemonset-vmb9n" [6b75a499-8d81-416e-b4c9-21f3cd9a8422] Running
	I0917 16:56:47.256181   21549 system_pods.go:61] "registry-66c9cd494c-hpl5r" [19701c96-17bb-45cc-97c3-1363596536ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 16:56:47.256193   21549 system_pods.go:61] "registry-proxy-ntlkg" [0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 16:56:47.256204   21549 system_pods.go:61] "snapshot-controller-56fcc65765-9dbjm" [a9e9465a-ceaf-4ff2-87d5-2729d2d9a731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:56:47.256217   21549 system_pods.go:61] "snapshot-controller-56fcc65765-zbvqb" [d52a4c9a-68cb-4ba0-88aa-cbb5bd766260] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:56:47.256225   21549 system_pods.go:61] "storage-provisioner" [cea639f8-7108-4343-92f0-44f0144e1e94] Running
	I0917 16:56:47.256239   21549 system_pods.go:61] "tiller-deploy-b48cc5f79-mpbps" [d9800363-2d33-4bb0-a102-d83d7acb6066] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 16:56:47.256251   21549 system_pods.go:74] duration metric: took 177.9416ms to wait for pod list to return data ...
	I0917 16:56:47.256264   21549 default_sa.go:34] waiting for default service account to be created ...
	I0917 16:56:47.315937   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:47.451371   21549 default_sa.go:45] found service account: "default"
	I0917 16:56:47.451396   21549 default_sa.go:55] duration metric: took 195.122053ms for default service account to be created ...
	I0917 16:56:47.451405   21549 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 16:56:47.659003   21549 system_pods.go:86] 17 kube-system pods found
	I0917 16:56:47.659033   21549 system_pods.go:89] "coredns-7c65d6cfc9-9qpj2" [080c228a-31be-4ade-9a02-68ef48f2ca0e] Running
	I0917 16:56:47.659046   21549 system_pods.go:89] "csi-hostpath-attacher-0" [25548efe-9818-480e-90dd-6171b5c0f937] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 16:56:47.659056   21549 system_pods.go:89] "csi-hostpath-resizer-0" [5a25b87c-c0d5-45ea-a309-d3e9c540acde] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 16:56:47.659070   21549 system_pods.go:89] "csi-hostpathplugin-cx9p4" [56814e30-bc55-441a-9a68-2a4046289c51] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 16:56:47.659081   21549 system_pods.go:89] "etcd-ubuntu-20-agent-2" [acec1466-e28f-4274-ace2-d1d0bec539c6] Running
	I0917 16:56:47.659088   21549 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [c897cfd6-e267-4b25-bed4-ab17bd209143] Running
	I0917 16:56:47.659092   21549 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [c7c5063e-2543-445c-b5bd-87e7da62e0a8] Running
	I0917 16:56:47.659096   21549 system_pods.go:89] "kube-proxy-gh688" [3380bac6-9728-4704-960e-e2aa8e092287] Running
	I0917 16:56:47.659100   21549 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [4c23873f-7d20-4076-b15d-4dc60defdb16] Running
	I0917 16:56:47.659105   21549 system_pods.go:89] "metrics-server-84c5f94fbc-8gkhd" [58aa5963-88a9-4e47-afc2-de6497453b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 16:56:47.659111   21549 system_pods.go:89] "nvidia-device-plugin-daemonset-vmb9n" [6b75a499-8d81-416e-b4c9-21f3cd9a8422] Running
	I0917 16:56:47.659117   21549 system_pods.go:89] "registry-66c9cd494c-hpl5r" [19701c96-17bb-45cc-97c3-1363596536ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 16:56:47.659125   21549 system_pods.go:89] "registry-proxy-ntlkg" [0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 16:56:47.659131   21549 system_pods.go:89] "snapshot-controller-56fcc65765-9dbjm" [a9e9465a-ceaf-4ff2-87d5-2729d2d9a731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:56:47.659154   21549 system_pods.go:89] "snapshot-controller-56fcc65765-zbvqb" [d52a4c9a-68cb-4ba0-88aa-cbb5bd766260] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 16:56:47.659164   21549 system_pods.go:89] "storage-provisioner" [cea639f8-7108-4343-92f0-44f0144e1e94] Running
	I0917 16:56:47.659172   21549 system_pods.go:89] "tiller-deploy-b48cc5f79-mpbps" [d9800363-2d33-4bb0-a102-d83d7acb6066] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 16:56:47.659187   21549 system_pods.go:126] duration metric: took 207.773007ms to wait for k8s-apps to be running ...
	I0917 16:56:47.659200   21549 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 16:56:47.659252   21549 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0917 16:56:47.682142   21549 system_svc.go:56] duration metric: took 22.93038ms WaitForService to wait for kubelet
	I0917 16:56:47.682171   21549 kubeadm.go:582] duration metric: took 10.727762858s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:56:47.682194   21549 node_conditions.go:102] verifying NodePressure condition ...
	I0917 16:56:47.702243   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:47.816826   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:47.852668   21549 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 16:56:47.852705   21549 node_conditions.go:123] node cpu capacity is 8
	I0917 16:56:47.852720   21549 node_conditions.go:105] duration metric: took 170.521082ms to run NodePressure ...
	I0917 16:56:47.852736   21549 start.go:241] waiting for startup goroutines ...
	I0917 16:56:48.203527   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:48.317427   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:48.701791   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:48.816206   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:49.201928   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:49.316257   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:49.704436   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:49.816128   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:50.202281   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:50.316256   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:50.702664   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:50.815671   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.202736   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:51.316554   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:51.702717   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:51.816833   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.202334   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:52.317135   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:52.702205   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:52.816932   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.205244   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.316709   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:53.702801   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:53.815993   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.202203   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.316725   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:54.702196   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:54.816317   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.202465   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.317140   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:55.702508   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:55.817067   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.202249   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.316941   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:56.702715   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:56.817117   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.201756   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.316744   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:57.701623   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:57.816424   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.202129   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.316598   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:58.701698   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:58.815897   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:56:59.202216   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:56:59.316789   21549 kapi.go:107] duration metric: took 21.004240097s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 16:56:59.701672   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.202097   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:00.702403   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.202637   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:01.702983   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.203332   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:02.702484   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.202655   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:03.702957   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.201212   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.701971   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.203476   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.703053   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.201665   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.702134   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.202302   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.701578   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.202584   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.702623   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.202129   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.702635   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.202906   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.759710   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.202516   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.702466   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.203283   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.702657   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.202734   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.702248   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.203676   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.702630   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.202305   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.702692   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.203563   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.702671   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.202241   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.702841   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.202156   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.701149   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.202789   21549 kapi.go:107] duration metric: took 38.505044913s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 16:57:26.447552   21549 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:57:26.447575   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:26.946994   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.447362   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:27.947873   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.446459   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:28.947254   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.446990   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:29.947269   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.447284   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:30.947537   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.447839   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:31.946568   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.446431   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:32.947708   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.446414   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:33.947304   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.447538   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:34.947520   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.447742   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:35.948791   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.447087   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:36.947177   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.447610   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:37.946172   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.447147   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:38.946489   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.447621   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:39.946401   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.446629   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:40.946762   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.446802   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:41.947014   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.446769   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:42.946742   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.447016   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:43.946608   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.446670   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:44.947571   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.446837   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:45.946733   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.446980   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:46.946769   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.446781   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:47.947695   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.447092   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:48.947135   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.447248   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:49.947129   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.446472   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:50.947742   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.446786   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:51.946993   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.447199   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:52.947471   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.447521   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:53.947397   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.447542   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:54.946392   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.447146   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:55.947240   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.446997   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:56.947079   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.447178   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:57.947276   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.447578   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:58.947448   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.447833   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:57:59.947023   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.447562   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:00.947833   21549 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:01.447235   21549 kapi.go:107] duration metric: took 1m16.50357622s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 16:58:01.449065   21549 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0917 16:58:01.450295   21549 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 16:58:01.451599   21549 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 16:58:01.452992   21549 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, cloud-spanner, storage-provisioner, helm-tiller, yakd, metrics-server, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0917 16:58:01.454271   21549 addons.go:510] duration metric: took 1m24.505442287s for enable addons: enabled=[default-storageclass nvidia-device-plugin cloud-spanner storage-provisioner helm-tiller yakd metrics-server storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0917 16:58:01.454317   21549 start.go:246] waiting for cluster config update ...
	I0917 16:58:01.454348   21549 start.go:255] writing updated cluster config ...
	I0917 16:58:01.454577   21549 exec_runner.go:51] Run: rm -f paused
	I0917 16:58:01.500686   21549 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 16:58:01.502190   21549 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Mon 2024-08-05 23:30:02 UTC, end at Tue 2024-09-17 17:07:53 UTC. --
	Sep 17 17:00:21 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:00:21.166011397Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 17 17:00:21 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:00:21.166034832Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 17 17:00:21 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:00:21.168101699Z" level=error msg="Error running exec 0e174ea491db3611bcb00d493db0aeae3f2e9f507fed2d37cab3e5abf0315c4e in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 17 17:00:21 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:00:21.310357193Z" level=info msg="ignoring event" container=25fbb34498c733a2033ec515964c83b0e212aafef68ac81d96d4ca4cfb41e416 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:01:39 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:01:39.844805307Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 17 17:01:39 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:01:39.847008235Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 17 17:03:10 ubuntu-20-agent-2 cri-dockerd[22094]: time="2024-09-17T17:03:10Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 17 17:03:12 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:03:12.154800964Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 17 17:03:12 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:03:12.154809227Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 17 17:03:12 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:03:12.156899422Z" level=error msg="Error running exec 136ff7bb20dd78863c3cc23a5083f55e228914c0eb22dcdb89616dc740e7fdcd in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 17 17:03:12 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:03:12.362336357Z" level=info msg="ignoring event" container=42d90328e960f9938048fe0aa335ce1eac424e8a93002447bc35e73d15f7d3f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:04:23 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:04:23.830737524Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 17 17:04:23 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:04:23.833146485Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 17 17:06:52 ubuntu-20-agent-2 cri-dockerd[22094]: time="2024-09-17T17:06:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e47e0c7dcc6e05a09df26c649cbbf8e5011f0fd25e9045a6b7a477613f6aae97/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 17:06:53 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:06:53.046029801Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:06:53 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:06:53.047858481Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:07:08 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:08.824656426Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:07:08 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:08.826801788Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:07:34 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:34.842271932Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:07:34 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:34.844586504Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:07:52 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:52.508496252Z" level=info msg="ignoring event" container=e47e0c7dcc6e05a09df26c649cbbf8e5011f0fd25e9045a6b7a477613f6aae97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:07:52 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:52.765278532Z" level=info msg="ignoring event" container=286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:07:52 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:52.828262297Z" level=info msg="ignoring event" container=ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:07:52 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:52.905634884Z" level=info msg="ignoring event" container=03b7ecd5049a0b0e4f170c053146253878e290cedbb23b030c4e8745fec84f35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:07:53 ubuntu-20-agent-2 dockerd[21766]: time="2024-09-17T17:07:53.002290423Z" level=info msg="ignoring event" container=5d6c574ba16456274eeca699edfaacac0112533f0e158a6e560efb24db66db80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	42d90328e960f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   b3cf36f551ed1       gadget-zgzlq
	aac712ff901ad       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   4386982616e20       gcp-auth-89d5ffd79-8qvzd
	9fdc2dd05bc3f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   4eca512276755       csi-hostpathplugin-cx9p4
	a8fd057f8b3c6       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   4eca512276755       csi-hostpathplugin-cx9p4
	1ce85f79c98bc       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   4eca512276755       csi-hostpathplugin-cx9p4
	cd0391c17f3ba       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   4eca512276755       csi-hostpathplugin-cx9p4
	e0ed9420faa40       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   4eca512276755       csi-hostpathplugin-cx9p4
	a141bbbb9a328       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   4eca512276755       csi-hostpathplugin-cx9p4
	f56a6217bd707       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   cce3639259c76       csi-hostpath-resizer-0
	ab7544030b093       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   9e7e1a28522f3       csi-hostpath-attacher-0
	ae703b01f7db7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   e365ec34c37a1       snapshot-controller-56fcc65765-9dbjm
	a9d4a6f6f9b14       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   e1ad906d24a5d       snapshot-controller-56fcc65765-zbvqb
	49110e01c1cad       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   479235a7b30e7       yakd-dashboard-67d98fc6b-pll6s
	9511eef0ea377       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   cb7be09ae46cd       local-path-provisioner-86d989889c-lh2fz
	ba51dc120c8de       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              10 minutes ago      Exited              registry-proxy                           0                   5d6c574ba1645       registry-proxy-ntlkg
	39dfe3786e6d0       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   4fc2559529e4b       metrics-server-84c5f94fbc-8gkhd
	286478f31a2ec       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   03b7ecd5049a0       registry-66c9cd494c-hpl5r
	be27b9470c151       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  11 minutes ago      Running             tiller                                   0                   2c62a84f784c6       tiller-deploy-b48cc5f79-mpbps
	b30e6e44e63af       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   a4cc3c9760216       cloud-spanner-emulator-769b77f747-8jbj4
	7ee296b1ba360       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   1192e884e3947       nvidia-device-plugin-daemonset-vmb9n
	3a66ff84e5ac2       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   3aa0b152221f2       storage-provisioner
	da7ecd388c422       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   52dae8d81ce4c       coredns-7c65d6cfc9-9qpj2
	75f60c514f232       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   38c4075b15794       kube-proxy-gh688
	65cda4c9ddb91       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   e79ac65dbfaf9       kube-scheduler-ubuntu-20-agent-2
	7e80dc52f6739       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   c7d1a35564d0a       kube-controller-manager-ubuntu-20-agent-2
	50c770153cad0       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   b44d262242c82       kube-apiserver-ubuntu-20-agent-2
	bc5bcd721b4b7       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   324e12d04e168       etcd-ubuntu-20-agent-2
	
	
	==> coredns [da7ecd388c42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46716 - 25869 "HINFO IN 5272589392702121106.6960695390282991374. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018139s
	[INFO] 10.244.0.24:43221 - 50682 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00031883s
	[INFO] 10.244.0.24:44633 - 55050 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016096s
	[INFO] 10.244.0.24:35487 - 33231 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138185s
	[INFO] 10.244.0.24:51792 - 57535 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000177046s
	[INFO] 10.244.0.24:37677 - 19955 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122973s
	[INFO] 10.244.0.24:34696 - 4038 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128064s
	[INFO] 10.244.0.24:43027 - 430 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00354286s
	[INFO] 10.244.0.24:44360 - 34144 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003622385s
	[INFO] 10.244.0.24:43701 - 114 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003197038s
	[INFO] 10.244.0.24:33612 - 25542 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003592389s
	[INFO] 10.244.0.24:60148 - 27313 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.001965419s
	[INFO] 10.244.0.24:59257 - 26997 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003788553s
	[INFO] 10.244.0.24:47372 - 28977 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001144661s
	[INFO] 10.244.0.24:38120 - 44340 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002376177s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T16_56_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:07:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:03:40 +0000   Tue, 17 Sep 2024 16:56:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:03:40 +0000   Tue, 17 Sep 2024 16:56:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:03:40 +0000   Tue, 17 Sep 2024 16:56:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:03:40 +0000   Tue, 17 Sep 2024 16:56:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    73e1cb79-085e-4cd7-943d-79e01a2277ef
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-8jbj4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-zgzlq                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-8qvzd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-9qpj2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-cx9p4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gh688                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-8gkhd              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-vmb9n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-9dbjm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-zbvqb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-mpbps                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-lh2fz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-pll6s               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e bb 0b 23 e8 3b 08 06
	[  +0.012627] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 8c 9b 48 df f3 08 06
	[  +2.699288] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 32 c5 3b 48 01 08 06
	[  +1.807369] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 72 9a 93 65 43 3f 08 06
	[  +2.191185] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 96 40 db 79 ac 08 06
	[  +3.889734] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e a8 f9 f1 ee 0d 08 06
	[  +1.004495] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 7c f9 59 f5 89 08 06
	[  +0.408204] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae ff 55 e8 f5 b5 08 06
	[  +0.057380] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 b5 2e 93 60 19 08 06
	[ +33.362950] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 de 32 59 90 52 08 06
	[  +0.026913] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 7c 8c fb 5e 67 08 06
	[Sep17 16:58] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 42 e3 6e 4d b5 08 06
	[  +0.000465] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 3c b6 85 97 b9 08 06
	
	
	==> etcd [bc5bcd721b4b] <==
	{"level":"info","ts":"2024-09-17T16:56:28.236523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:28.236532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:28.236544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:28.236554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:28.237466Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T16:56:28.237466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:28.237497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:28.237514Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:28.237654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:28.237682Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:28.238171Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:28.238411Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:28.238447Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:28.238667Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:28.238683Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:28.239511Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-17T16:56:28.239630Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T16:56:44.622465Z","caller":"traceutil/trace.go:171","msg":"trace[1140881304] transaction","detail":"{read_only:false; response_revision:838; number_of_response:1; }","duration":"113.34046ms","start":"2024-09-17T16:56:44.509103Z","end":"2024-09-17T16:56:44.622444Z","steps":["trace[1140881304] 'process raft request'  (duration: 113.164959ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T16:56:44.855803Z","caller":"traceutil/trace.go:171","msg":"trace[279004550] linearizableReadLoop","detail":"{readStateIndex:861; appliedIndex:860; }","duration":"156.335499ms","start":"2024-09-17T16:56:44.699445Z","end":"2024-09-17T16:56:44.855781Z","steps":["trace[279004550] 'read index received'  (duration: 88.147277ms)","trace[279004550] 'applied index is now lower than readState.Index'  (duration: 68.187376ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T16:56:44.855858Z","caller":"traceutil/trace.go:171","msg":"trace[1117186504] transaction","detail":"{read_only:false; response_revision:841; number_of_response:1; }","duration":"165.061633ms","start":"2024-09-17T16:56:44.690775Z","end":"2024-09-17T16:56:44.855836Z","steps":["trace[1117186504] 'process raft request'  (duration: 96.884109ms)","trace[1117186504] 'compare'  (duration: 67.953208ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T16:56:44.855993Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.501677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T16:56:44.856054Z","caller":"traceutil/trace.go:171","msg":"trace[1013416870] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:841; }","duration":"156.605514ms","start":"2024-09-17T16:56:44.699436Z","end":"2024-09-17T16:56:44.856042Z","steps":["trace[1013416870] 'agreement among raft nodes before linearized reading'  (duration: 156.442441ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:06:28.253688Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1720}
	{"level":"info","ts":"2024-09-17T17:06:28.277207Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1720,"took":"23.014801ms","hash":967477646,"current-db-size-bytes":8499200,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":4378624,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-17T17:06:28.277253Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":967477646,"revision":1720,"compact-revision":-1}
	
	
	==> gcp-auth [aac712ff901a] <==
	2024/09/17 16:58:00 GCP Auth Webhook started!
	2024/09/17 16:58:17 Ready to marshal response ...
	2024/09/17 16:58:17 Ready to write response ...
	2024/09/17 16:58:18 Ready to marshal response ...
	2024/09/17 16:58:18 Ready to write response ...
	2024/09/17 16:58:40 Ready to marshal response ...
	2024/09/17 16:58:40 Ready to write response ...
	2024/09/17 16:58:40 Ready to marshal response ...
	2024/09/17 16:58:40 Ready to write response ...
	2024/09/17 16:58:40 Ready to marshal response ...
	2024/09/17 16:58:40 Ready to write response ...
	2024/09/17 17:06:52 Ready to marshal response ...
	2024/09/17 17:06:52 Ready to write response ...
	
	
	==> kernel <==
	 17:07:53 up 50 min,  0 users,  load average: 0.14, 0.28, 0.31
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [50c770153cad] <==
	W0917 16:57:20.374032       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.238.67:443: connect: connection refused
	W0917 16:57:25.950911       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.141.16:443: connect: connection refused
	E0917 16:57:25.950956       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.141.16:443: connect: connection refused" logger="UnhandledError"
	W0917 16:57:47.959657       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.141.16:443: connect: connection refused
	E0917 16:57:47.959692       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.141.16:443: connect: connection refused" logger="UnhandledError"
	W0917 16:57:47.971692       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.141.16:443: connect: connection refused
	E0917 16:57:47.971732       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.141.16:443: connect: connection refused" logger="UnhandledError"
	I0917 16:58:17.765109       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0917 16:58:17.781388       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0917 16:58:30.142280       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0917 16:58:30.151146       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0917 16:58:30.238552       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 16:58:30.256833       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 16:58:30.268052       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0917 16:58:30.384105       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 16:58:30.385796       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 16:58:30.417896       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 16:58:30.590201       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0917 16:58:31.169277       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0917 16:58:31.444540       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0917 16:58:31.445746       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0917 16:58:31.445767       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0917 16:58:31.590426       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 16:58:31.590435       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 16:58:31.668070       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [7e80dc52f673] <==
	W0917 17:06:43.439064       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:06:43.439102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:06:48.799233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:06:48.799277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:06:51.323044       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:06:51.323087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:06:53.987714       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:06:53.987752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:04.276863       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:04.276909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:07.748910       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:07.748950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:19.913955       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:19.914005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:22.525782       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:22.525829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:24.589146       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:24.589187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:32.735324       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:32.735371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:38.916237       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:38.916278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:07:46.831653       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:07:46.831694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:07:52.732727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="13.437µs"
	
	
	==> kube-proxy [75f60c514f23] <==
	I0917 16:56:37.721237       1 server_linux.go:66] "Using iptables proxy"
	I0917 16:56:37.920974       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0917 16:56:37.921045       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:38.008191       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 16:56:38.008270       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:38.012082       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:38.012392       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:38.012416       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:38.018962       1 config.go:328] "Starting node config controller"
	I0917 16:56:38.018982       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:38.019674       1 config.go:199] "Starting service config controller"
	I0917 16:56:38.019705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:38.019769       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:38.019776       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:38.119094       1 shared_informer.go:320] Caches are synced for node config
	I0917 16:56:38.119944       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:38.119991       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [65cda4c9ddb9] <==
	W0917 16:56:29.136873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 16:56:29.136916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:29.136963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0917 16:56:29.136991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:29.136997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0917 16:56:29.137019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:29.137002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:29.137063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:29.137140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:29.137356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:29.948118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:29.948159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:30.118810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:30.118854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:30.133645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:30.133689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:30.140117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:30.140156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:30.141922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:30.141951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:30.224645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:30.224694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:30.247357       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 16:56:30.247402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 16:56:30.633398       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Mon 2024-08-05 23:30:02 UTC, end at Tue 2024-09-17 17:07:53 UTC. --
	Sep 17 17:07:34 ubuntu-20-agent-2 kubelet[22981]: E0917 17:07:34.846388   22981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="50584131-c6f4-456e-b3c9-fdcbd1b30727"
	Sep 17 17:07:43 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:43.687762   22981 scope.go:117] "RemoveContainer" containerID="42d90328e960f9938048fe0aa335ce1eac424e8a93002447bc35e73d15f7d3f1"
	Sep 17 17:07:43 ubuntu-20-agent-2 kubelet[22981]: E0917 17:07:43.687913   22981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-zgzlq_gadget(28e74438-b5be-466c-a85f-5e5478c67296)\"" pod="gadget/gadget-zgzlq" podUID="28e74438-b5be-466c-a85f-5e5478c67296"
	Sep 17 17:07:43 ubuntu-20-agent-2 kubelet[22981]: E0917 17:07:43.689807   22981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="e31778d7-4074-43c5-8a11-8b5bd6327ad0"
	Sep 17 17:07:47 ubuntu-20-agent-2 kubelet[22981]: E0917 17:07:47.688978   22981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="50584131-c6f4-456e-b3c9-fdcbd1b30727"
	Sep 17 17:07:52 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:52.714467   22981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/50584131-c6f4-456e-b3c9-fdcbd1b30727-gcp-creds\") pod \"50584131-c6f4-456e-b3c9-fdcbd1b30727\" (UID: \"50584131-c6f4-456e-b3c9-fdcbd1b30727\") "
	Sep 17 17:07:52 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:52.714538   22981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z88w\" (UniqueName: \"kubernetes.io/projected/50584131-c6f4-456e-b3c9-fdcbd1b30727-kube-api-access-6z88w\") pod \"50584131-c6f4-456e-b3c9-fdcbd1b30727\" (UID: \"50584131-c6f4-456e-b3c9-fdcbd1b30727\") "
	Sep 17 17:07:52 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:52.714584   22981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50584131-c6f4-456e-b3c9-fdcbd1b30727-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "50584131-c6f4-456e-b3c9-fdcbd1b30727" (UID: "50584131-c6f4-456e-b3c9-fdcbd1b30727"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:07:52 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:52.716626   22981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50584131-c6f4-456e-b3c9-fdcbd1b30727-kube-api-access-6z88w" (OuterVolumeSpecName: "kube-api-access-6z88w") pod "50584131-c6f4-456e-b3c9-fdcbd1b30727" (UID: "50584131-c6f4-456e-b3c9-fdcbd1b30727"). InnerVolumeSpecName "kube-api-access-6z88w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:07:52 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:52.815307   22981 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/50584131-c6f4-456e-b3c9-fdcbd1b30727-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 17 17:07:52 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:52.815352   22981 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6z88w\" (UniqueName: \"kubernetes.io/projected/50584131-c6f4-456e-b3c9-fdcbd1b30727-kube-api-access-6z88w\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.117077   22981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58wfz\" (UniqueName: \"kubernetes.io/projected/19701c96-17bb-45cc-97c3-1363596536ce-kube-api-access-58wfz\") pod \"19701c96-17bb-45cc-97c3-1363596536ce\" (UID: \"19701c96-17bb-45cc-97c3-1363596536ce\") "
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.120192   22981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19701c96-17bb-45cc-97c3-1363596536ce-kube-api-access-58wfz" (OuterVolumeSpecName: "kube-api-access-58wfz") pod "19701c96-17bb-45cc-97c3-1363596536ce" (UID: "19701c96-17bb-45cc-97c3-1363596536ce"). InnerVolumeSpecName "kube-api-access-58wfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.217561   22981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cbdq\" (UniqueName: \"kubernetes.io/projected/0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8-kube-api-access-6cbdq\") pod \"0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8\" (UID: \"0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8\") "
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.217676   22981 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-58wfz\" (UniqueName: \"kubernetes.io/projected/19701c96-17bb-45cc-97c3-1363596536ce-kube-api-access-58wfz\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.219836   22981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8-kube-api-access-6cbdq" (OuterVolumeSpecName: "kube-api-access-6cbdq") pod "0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8" (UID: "0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8"). InnerVolumeSpecName "kube-api-access-6cbdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.318548   22981 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6cbdq\" (UniqueName: \"kubernetes.io/projected/0d579daa-daa4-4c2a-b2b7-bdad94bdc8d8-kube-api-access-6cbdq\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.358560   22981 scope.go:117] "RemoveContainer" containerID="ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459"
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.375127   22981 scope.go:117] "RemoveContainer" containerID="ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459"
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: E0917 17:07:53.377951   22981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459" containerID="ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459"
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.378000   22981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459"} err="failed to get container status \"ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459\": rpc error: code = Unknown desc = Error response from daemon: No such container: ba51dc120c8de1734b2f3e991499bad3476dba3903a2316a96c123380ff82459"
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.378030   22981 scope.go:117] "RemoveContainer" containerID="286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658"
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.396717   22981 scope.go:117] "RemoveContainer" containerID="286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658"
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: E0917 17:07:53.397593   22981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658" containerID="286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658"
	Sep 17 17:07:53 ubuntu-20-agent-2 kubelet[22981]: I0917 17:07:53.397643   22981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658"} err="failed to get container status \"286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658\": rpc error: code = Unknown desc = Error response from daemon: No such container: 286478f31a2ec6061c234887f041c500c1afa8b5673327660ce83edc9bd1b658"
	
	
	==> storage-provisioner [3a66ff84e5ac] <==
	I0917 16:56:39.078746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:39.105045       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:39.105126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:39.130122       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:39.130367       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_8baa5e9d-61e1-4b9d-aedc-c797bc8cd918!
	I0917 16:56:39.133017       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"474e46ee-c694-474a-b688-2a7004c44283", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_8baa5e9d-61e1-4b9d-aedc-c797bc8cd918 became leader
	I0917 16:56:39.230768       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_8baa5e9d-61e1-4b9d-aedc-c797bc8cd918!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Tue, 17 Sep 2024 16:58:40 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v8hgh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v8hgh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m40s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m40s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m40s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.80s)


Test pass (111/168)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 2.56
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 1.1
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 42.12
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 101.38
29 TestAddons/serial/Volcano 38.44
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 11.44
36 TestAddons/parallel/MetricsServer 5.35
37 TestAddons/parallel/HelmTiller 9.3
39 TestAddons/parallel/CSI 53.11
40 TestAddons/parallel/Headlamp 15.86
41 TestAddons/parallel/CloudSpanner 5.24
43 TestAddons/parallel/NvidiaDevicePlugin 6.22
44 TestAddons/parallel/Yakd 10.39
45 TestAddons/StoppedEnableDisable 10.71
47 TestCertExpiration 226.41
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 26.04
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 33.89
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.06
65 TestFunctional/serial/MinikubeKubectlCmd 0.1
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 36.99
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.8
70 TestFunctional/serial/LogsFileCmd 0.86
71 TestFunctional/serial/InvalidService 4.67
73 TestFunctional/parallel/ConfigCmd 0.26
74 TestFunctional/parallel/DashboardCmd 9.62
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.08
77 TestFunctional/parallel/StatusCmd 0.43
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.24
81 TestFunctional/parallel/ProfileCmd/profile_list 0.21
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
84 TestFunctional/parallel/ServiceCmd/DeployApp 10.14
85 TestFunctional/parallel/ServiceCmd/List 0.33
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.14
89 TestFunctional/parallel/ServiceCmd/URL 0.14
90 TestFunctional/parallel/ServiceCmdConnect 7.29
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 21.81
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.27
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
107 TestFunctional/parallel/MySQL 21.14
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 12.72
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.1
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.38
122 TestFunctional/parallel/License 0.22
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.01
125 TestFunctional/delete_minikube_cached_images 0.02
130 TestImageBuild/serial/Setup 14.03
131 TestImageBuild/serial/NormalBuild 1.62
132 TestImageBuild/serial/BuildWithBuildArg 0.83
133 TestImageBuild/serial/BuildWithDockerIgnore 0.59
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.55
138 TestJSONOutput/start/Command 25.39
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.49
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.4
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 5.31
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.19
166 TestMainNoArgs 0.04
167 TestMinikubeProfile 34.25
175 TestPause/serial/Start 28.78
176 TestPause/serial/SecondStartNoReconfiguration 34.04
177 TestPause/serial/Pause 0.48
178 TestPause/serial/VerifyStatus 0.13
179 TestPause/serial/Unpause 0.38
180 TestPause/serial/PauseAgain 0.52
181 TestPause/serial/DeletePaused 1.59
182 TestPause/serial/VerifyDeletedResources 0.06
196 TestRunningBinaryUpgrade 73.35
198 TestStoppedBinaryUpgrade/Setup 0.92
199 TestStoppedBinaryUpgrade/Upgrade 49.15
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
201 TestKubernetesUpgrade 309.69
TestDownloadOnly/v1.20.0/json-events (2.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (2.559451507s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (2.56s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (53.949084ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:32
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:32.950744   17874 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:32.950860   17874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:32.950870   17874 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:32.950875   17874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:32.951100   17874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-10973/.minikube/bin
	W0917 16:55:32.951243   17874 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19662-10973/.minikube/config/config.json: open /home/jenkins/minikube-integration/19662-10973/.minikube/config/config.json: no such file or directory
	I0917 16:55:32.951844   17874 out.go:352] Setting JSON to true
	I0917 16:55:32.952783   17874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2278,"bootTime":1726589855,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:32.952877   17874 start.go:139] virtualization: kvm guest
	I0917 16:55:32.954997   17874 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0917 16:55:32.955121   17874 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-10973/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 16:55:32.955158   17874 notify.go:220] Checking for updates...
	I0917 16:55:32.956612   17874 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:32.958032   17874 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:32.959324   17874 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	I0917 16:55:32.960561   17874 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	I0917 16:55:32.961674   17874 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (1.1s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.104523762s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.10s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (55.152509ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:35.800912   18023 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:35.801034   18023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:35.801048   18023 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:35.801056   18023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:35.801297   18023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-10973/.minikube/bin
	I0917 16:55:35.802094   18023 out.go:352] Setting JSON to true
	I0917 16:55:35.803321   18023 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2281,"bootTime":1726589855,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 16:55:35.803451   18023 start.go:139] virtualization: kvm guest
	I0917 16:55:35.805360   18023 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0917 16:55:35.805488   18023 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-10973/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 16:55:35.805574   18023 notify.go:220] Checking for updates...
	I0917 16:55:35.806671   18023 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:35.808067   18023 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:35.809302   18023 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	I0917 16:55:35.810656   18023 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	I0917 16:55:35.812004   18023 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:44271 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (42.12s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (40.5996679s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.515692256s)
--- PASS: TestOffline (42.12s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (44.422692ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (43.384276ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (101.38s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m41.377942633s)
--- PASS: TestAddons/Setup (101.38s)

TestAddons/serial/Volcano (38.44s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 8.968212ms
addons_test.go:913: volcano-controller stabilized in 8.99314ms
addons_test.go:905: volcano-admission stabilized in 9.013655ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-hx42g" [0e2c146a-02dc-4861-98bf-e1fa9bf859d5] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003163299s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-7br9g" [8c5d127e-9fe7-4426-bb49-322e807c367d] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003226771s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-tnlzj" [6b774b3e-95ac-4323-9db0-46d400e90e71] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002863169s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2ead6d03-1ac8-4756-948d-36dc281ed09a] Pending
helpers_test.go:344: "test-job-nginx-0" [2ead6d03-1ac8-4756-948d-36dc281ed09a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2ead6d03-1ac8-4756-948d-36dc281ed09a] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003200694s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.110121609s)
--- PASS: TestAddons/serial/Volcano (38.44s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (11.44s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zgzlq" [28e74438-b5be-466c-a85f-5e5478c67296] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00431863s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.430095499s)
--- PASS: TestAddons/parallel/InspektorGadget (11.44s)

TestAddons/parallel/MetricsServer (5.35s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.84072ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-8gkhd" [58aa5963-88a9-4e47-afc2-de6497453b8f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003513907s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.35s)

TestAddons/parallel/HelmTiller (9.3s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.896758ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-mpbps" [d9800363-2d33-4bb0-a102-d83d7acb6066] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003357863s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.020412763s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.30s)

TestAddons/parallel/CSI (53.11s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.485034ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5bc89322-53f9-4f47-82d5-28a41f27718a] Pending
helpers_test.go:344: "task-pv-pod" [5bc89322-53f9-4f47-82d5-28a41f27718a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5bc89322-53f9-4f47-82d5-28a41f27718a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003415869s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [50367757-2c51-4737-88c2-154de08e6cc5] Pending
helpers_test.go:344: "task-pv-pod-restore" [50367757-2c51-4737-88c2-154de08e6cc5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [50367757-2c51-4737-88c2-154de08e6cc5] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003763165s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.2540746s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.11s)

TestAddons/parallel/Headlamp (15.86s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-zddwv" [63ff82ff-a077-43a5-a26f-2b0f528deb06] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-zddwv" [63ff82ff-a077-43a5-a26f-2b0f528deb06] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-zddwv" [63ff82ff-a077-43a5-a26f-2b0f528deb06] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003456942s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.379856197s)
--- PASS: TestAddons/parallel/Headlamp (15.86s)

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8jbj4" [eb19169c-2623-4ad2-9109-0eb8c54c5ec1] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003587446s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/NvidiaDevicePlugin (6.22s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vmb9n" [6b75a499-8d81-416e-b4c9-21f3cd9a8422] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00413273s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.22s)

TestAddons/parallel/Yakd (10.39s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pll6s" [e783be14-2baf-4398-bd0f-232fa43f2804] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003689069s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.38153865s)
--- PASS: TestAddons/parallel/Yakd (10.39s)

TestAddons/StoppedEnableDisable (10.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.428436682s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.71s)

TestCertExpiration (226.41s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.656543313s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.017779171s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.736417259s)
--- PASS: TestCertExpiration (226.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19662-10973/.minikube/files/etc/test/nested/copy/17862/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (26.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (26.037297835s)
--- PASS: TestFunctional/serial/StartWithProxy (26.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.89s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (33.890937792s)
functional_test.go:663: soft start took 33.891536396s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.89s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.984664866s)
functional_test.go:761: restart took 36.984794677s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.99s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.8s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.80s)

TestFunctional/serial/LogsFileCmd (0.86s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd3015764177/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.86s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (155.128758ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:31155 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.341882294s)
--- PASS: TestFunctional/serial/InvalidService (4.67s)

TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.641507ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.111835ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)

TestFunctional/parallel/DashboardCmd (9.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/17 17:15:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 52981: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.62s)

TestFunctional/parallel/DryRun (0.15s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (76.97097ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0917 17:15:42.745296   53368 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:15:42.745521   53368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:15:42.745529   53368 out.go:358] Setting ErrFile to fd 2...
	I0917 17:15:42.745533   53368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:15:42.745720   53368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-10973/.minikube/bin
	I0917 17:15:42.746210   53368 out.go:352] Setting JSON to false
	I0917 17:15:42.747169   53368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3488,"bootTime":1726589855,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:15:42.747271   53368 start.go:139] virtualization: kvm guest
	I0917 17:15:42.750435   53368 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 17:15:42.751699   53368 notify.go:220] Checking for updates...
	W0917 17:15:42.751689   53368 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-10973/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 17:15:42.751724   53368 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:15:42.753166   53368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:15:42.754601   53368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	I0917 17:15:42.756007   53368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	I0917 17:15:42.757350   53368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:15:42.758528   53368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:15:42.760064   53368 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:15:42.760348   53368 exec_runner.go:51] Run: systemctl --version
	I0917 17:15:42.762714   53368 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:15:42.774426   53368 out.go:177] * Using the none driver based on existing profile
	I0917 17:15:42.775542   53368 start.go:297] selected driver: none
	I0917 17:15:42.775556   53368 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:15:42.775649   53368 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:15:42.775671   53368 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0917 17:15:42.775951   53368 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0917 17:15:42.777991   53368 out.go:201] 
	W0917 17:15:42.779213   53368 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 17:15:42.780430   53368 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.433159ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 17:15:42.902382   53398 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:15:42.902493   53398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:15:42.902502   53398 out.go:358] Setting ErrFile to fd 2...
	I0917 17:15:42.902507   53398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:15:42.902800   53398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-10973/.minikube/bin
	I0917 17:15:42.903348   53398 out.go:352] Setting JSON to false
	I0917 17:15:42.904286   53398 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3488,"bootTime":1726589855,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 17:15:42.904376   53398 start.go:139] virtualization: kvm guest
	I0917 17:15:42.906462   53398 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0917 17:15:42.907710   53398 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-10973/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 17:15:42.907743   53398 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:15:42.907788   53398 notify.go:220] Checking for updates...
	I0917 17:15:42.910255   53398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:15:42.911555   53398 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	I0917 17:15:42.912803   53398 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	I0917 17:15:42.914040   53398 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 17:15:42.915354   53398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:15:42.917092   53398 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:15:42.917528   53398 exec_runner.go:51] Run: systemctl --version
	I0917 17:15:42.920154   53398 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:15:42.930637   53398 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0917 17:15:42.932008   53398 start.go:297] selected driver: none
	I0917 17:15:42.932021   53398 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:15:42.932137   53398 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:15:42.932157   53398 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0917 17:15:42.932456   53398 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0917 17:15:42.934742   53398 out.go:201] 
	W0917 17:15:42.936035   53398 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 17:15:42.937411   53398 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.43s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "170.549876ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "42.643028ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "171.334874ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.30039ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9rld6" [b1851ec9-bf3f-44b9-a5ee-7db07a5d15ee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9rld6" [b1851ec9-bf3f-44b9-a5ee-7db07a5d15ee] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003142373s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "331.43419ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:30226
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.14s)

TestFunctional/parallel/ServiceCmd/URL (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:30226
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)

TestFunctional/parallel/ServiceCmdConnect (7.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j8hc6" [c1a83aab-2821-4a40-9de6-f5ae0a135f24] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j8hc6" [c1a83aab-2821-4a40-9de6-f5ae0a135f24] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003794605s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:30903
functional_test.go:1675: http://10.138.0.48:30903: success! body:

Hostname: hello-node-connect-67bdd5bbb4-j8hc6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:30903
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.29s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (21.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a0ef2ffc-f4cb-4f58-b0e0-953e39c0ad98] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003491932s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [489077d4-1281-4017-88b1-968b0759f759] Pending
helpers_test.go:344: "sp-pod" [489077d4-1281-4017-88b1-968b0759f759] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [489077d4-1281-4017-88b1-968b0759f759] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003484072s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.133705996s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee258a84-455a-4dba-ae0f-d84916dbca43] Pending
helpers_test.go:344: "sp-pod" [ee258a84-455a-4dba-ae0f-d84916dbca43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee258a84-455a-4dba-ae0f-d84916dbca43] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003704793s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.81s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 55079: operation not permitted
helpers_test.go:508: unable to kill pid 55031: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.27s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b00ca123-d61c-4e22-ba24-cf9da406cbec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b00ca123-d61c-4e22-ba24-cf9da406cbec] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003423292s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.109.60 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MySQL (21.14s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-kvtgk" [b4634b23-9507-4d63-a8b1-493de2a182b0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-kvtgk" [b4634b23-9507-4d63-a8b1-493de2a182b0] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003665045s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-kvtgk -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-kvtgk -- mysql -ppassword -e "show databases;": exit status 1 (111.076637ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-kvtgk -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-kvtgk -- mysql -ppassword -e "show databases;": exit status 1 (118.995284ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-kvtgk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.14s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (12.72s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.718956378s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (12.72s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.100578942s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.10s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.38s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (14.03s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.027869824s)
--- PASS: TestImageBuild/serial/Setup (14.03s)

TestImageBuild/serial/NormalBuild (1.62s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.624169283s)
--- PASS: TestImageBuild/serial/NormalBuild (1.62s)

TestImageBuild/serial/BuildWithBuildArg (0.83s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)

TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

TestJSONOutput/start/Command (25.39s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (25.387632983s)
--- PASS: TestJSONOutput/start/Command (25.39s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.4s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.31s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.306879568s)
--- PASS: TestJSONOutput/stop/Command (5.31s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.584462ms)
-- stdout --
	{"specversion":"1.0","id":"d20eeff7-5dc6-404c-8a9b-9753b1b0a1c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d948a484-cf5f-4827-9ef8-db896016502a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"de6fc8d2-d271-44ce-a281-45b016a5e134","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa5ad9c9-a52f-41cf-a3ff-c757b459bbfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig"}}
	{"specversion":"1.0","id":"17cb7df3-aed1-487e-879b-6fa319d3f448","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube"}}
	{"specversion":"1.0","id":"427c17d4-48a6-48f5-8c7f-685ca0baf57c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b7e20752-0b00-4ae7-bc7a-2f376b8e7938","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b9b43a39-faa0-40e6-b4a6-f270758688ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
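As the stdout above shows, `--output=json` emits one CloudEvents-style JSON object per line. A minimal shell sketch for pulling a field out of such a line using only `grep` (`jq` would be the cleaner choice where available; the sample event below is abridged from the error event in the log):

```shell
# Abridged io.k8s.sigs.minikube.error event from the log above; the real event
# carries more fields (id, source, message, etc.).
event='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"}}'

# Extract the numeric exit code: first isolate the "exitcode":"NN" pair,
# then strip everything but the digits. No jq dependency.
exitcode=$(printf '%s\n' "$event" | grep -o '"exitcode":"[0-9]*"' | grep -o '[0-9]*')
echo "$exitcode"   # prints 56
```

A CI wrapper consuming this stream would typically match on `"type":"io.k8s.sigs.minikube.error"` lines and surface `data.exitcode` and `data.name` exactly as the test harness does here.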

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (34.25s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.597061567s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.844622872s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.203495538s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.25s)

                                                
                                    
TestPause/serial/Start (28.78s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.776783039s)
--- PASS: TestPause/serial/Start (28.78s)

TestPause/serial/SecondStartNoReconfiguration (34.04s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.040793117s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.04s)

TestPause/serial/Pause (0.48s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

TestPause/serial/VerifyStatus (0.13s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (130.095725ms)
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)

TestPause/serial/Unpause (0.38s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.38s)

TestPause/serial/PauseAgain (0.52s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.52s)

TestPause/serial/DeletePaused (1.59s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.58870332s)
--- PASS: TestPause/serial/DeletePaused (1.59s)

TestPause/serial/VerifyDeletedResources (0.06s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                    
TestRunningBinaryUpgrade (73.35s)
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1823241782 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1823241782 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (31.692407087s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (37.163034098s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.029078088s)
--- PASS: TestRunningBinaryUpgrade (73.35s)

TestStoppedBinaryUpgrade/Setup (0.92s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

TestStoppedBinaryUpgrade/Upgrade (49.15s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.204267275 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.204267275 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.141200941s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.204267275 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.204267275 -p minikube stop: (23.718064836s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.295200946s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

                                                
                                    
TestKubernetesUpgrade (309.69s)
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.578384981s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.800239712s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (67.332839ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m18.640861672s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (65.064005ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-10973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-10973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.217905975s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.261886693s)
--- PASS: TestKubernetesUpgrade (309.69s)
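The downgrade refusal above (K8S_DOWNGRADE_UNSUPPORTED) is, at heart, a semantic-version comparison between the existing cluster and the requested version. A minimal sketch of the same decision using `sort -V`; this mirrors the check conceptually and is not minikube's actual implementation:

```shell
# Existing cluster version vs. requested version, as in the log above.
existing="v1.31.1"
requested="v1.20.0"

# sort -V orders version strings numerically per component; if the requested
# version sorts below the existing one, the request is a downgrade.
lowest=$(printf '%s\n%s\n' "$existing" "$requested" | sort -V | head -n1)
if [ "$requested" = "$lowest" ] && [ "$requested" != "$existing" ]; then
  decision="refuse: cannot downgrade $existing to $requested"
else
  decision="ok"
fi
echo "$decision"
```

Here `sort -V` places v1.20.0 before v1.31.1, so the request is refused, matching the exit-status-106 path the test asserts.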

                                                
                                    

Test skip (56/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.12
190 TestStartStop/group/newest-cni 0.12
191 TestStartStop/group/default-k8s-diff-port 0.13
192 TestStartStop/group/no-preload 0.12
193 TestStartStop/group/disable-driver-mounts 0.13
194 TestStartStop/group/embed-certs 0.13
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.12s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
