Test Report: none_Linux 19643

17d31f5d116bbb5d9ac8f4a1c2873ea47cdfa40f:2024-09-14:36211

Failed tests (1/168)

|-------|------------------------------|--------------|
| Order |         Failed test          | Duration (s) |
|-------|------------------------------|--------------|
|    33 | TestAddons/parallel/Registry |        71.83 |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (71.83s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.665201ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-l64nd" [a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003861891s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bst86" [04bf6491-0898-4738-9ad3-f4f343173ece] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003221686s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.09187854s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/14 16:55:58 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:38793               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:44 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:46 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 14 Sep 24 16:46 UTC | 14 Sep 24 16:46 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 14 Sep 24 16:55 UTC | 14 Sep 24 16:55 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:44:22
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:44:22.355989   19617 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:44:22.356188   19617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:22.356198   19617 out.go:358] Setting ErrFile to fd 2...
	I0914 16:44:22.356203   19617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:22.356375   19617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8655/.minikube/bin
	I0914 16:44:22.357017   19617 out.go:352] Setting JSON to false
	I0914 16:44:22.357896   19617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1611,"bootTime":1726330651,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:44:22.357984   19617 start.go:139] virtualization: kvm guest
	I0914 16:44:22.360403   19617 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0914 16:44:22.361874   19617 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-8655/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 16:44:22.361915   19617 notify.go:220] Checking for updates...
	I0914 16:44:22.361918   19617 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 16:44:22.363437   19617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:44:22.364706   19617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	I0914 16:44:22.366031   19617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	I0914 16:44:22.367318   19617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 16:44:22.368739   19617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 16:44:22.370143   19617 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:44:22.380314   19617 out.go:177] * Using the none driver based on user configuration
	I0914 16:44:22.381550   19617 start.go:297] selected driver: none
	I0914 16:44:22.381563   19617 start.go:901] validating driver "none" against <nil>
	I0914 16:44:22.381577   19617 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 16:44:22.381635   19617 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0914 16:44:22.381987   19617 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0914 16:44:22.382514   19617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:44:22.382820   19617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:44:22.382856   19617 cni.go:84] Creating CNI manager for ""
	I0914 16:44:22.382928   19617 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 16:44:22.382943   19617 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:44:22.383010   19617 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:22.384512   19617 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0914 16:44:22.386117   19617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/config.json ...
	I0914 16:44:22.386158   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/config.json: {Name:mk68344fb9bcbfd8bf53d7b2e3abb227c2bdc645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:22.386324   19617 start.go:360] acquireMachinesLock for minikube: {Name:mkb09dfcf1fbcf87c40089c01b47477d7897a09e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 16:44:22.386375   19617 start.go:364] duration metric: took 34.967µs to acquireMachinesLock for "minikube"
	I0914 16:44:22.386393   19617 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 16:44:22.386460   19617 start.go:125] createHost starting for "" (driver="none")
	I0914 16:44:22.388137   19617 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0914 16:44:22.389436   19617 exec_runner.go:51] Run: systemctl --version
	I0914 16:44:22.391941   19617 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0914 16:44:22.391977   19617 client.go:168] LocalClient.Create starting
	I0914 16:44:22.392042   19617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8655/.minikube/certs/ca.pem
	I0914 16:44:22.392088   19617 main.go:141] libmachine: Decoding PEM data...
	I0914 16:44:22.392109   19617 main.go:141] libmachine: Parsing certificate...
	I0914 16:44:22.392177   19617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19643-8655/.minikube/certs/cert.pem
	I0914 16:44:22.392212   19617 main.go:141] libmachine: Decoding PEM data...
	I0914 16:44:22.392229   19617 main.go:141] libmachine: Parsing certificate...
	I0914 16:44:22.392553   19617 client.go:171] duration metric: took 568.125µs to LocalClient.Create
	I0914 16:44:22.392581   19617 start.go:167] duration metric: took 641.381µs to libmachine.API.Create "minikube"
	I0914 16:44:22.392589   19617 start.go:293] postStartSetup for "minikube" (driver="none")
	I0914 16:44:22.392630   19617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 16:44:22.392680   19617 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 16:44:22.402742   19617 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 16:44:22.402768   19617 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 16:44:22.402780   19617 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 16:44:22.405124   19617 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0914 16:44:22.406743   19617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8655/.minikube/addons for local assets ...
	I0914 16:44:22.406820   19617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-8655/.minikube/files for local assets ...
	I0914 16:44:22.406843   19617 start.go:296] duration metric: took 14.248686ms for postStartSetup
	I0914 16:44:22.407544   19617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/config.json ...
	I0914 16:44:22.407677   19617 start.go:128] duration metric: took 21.207956ms to createHost
	I0914 16:44:22.407690   19617 start.go:83] releasing machines lock for "minikube", held for 21.305114ms
	I0914 16:44:22.408018   19617 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 16:44:22.408132   19617 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0914 16:44:22.411223   19617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 16:44:22.411289   19617 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 16:44:22.421088   19617 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 16:44:22.421124   19617 start.go:495] detecting cgroup driver to use...
	I0914 16:44:22.421159   19617 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 16:44:22.421290   19617 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 16:44:22.440365   19617 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0914 16:44:22.449929   19617 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 16:44:22.461181   19617 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 16:44:22.461232   19617 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 16:44:22.470612   19617 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 16:44:22.479840   19617 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 16:44:22.489304   19617 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 16:44:22.498480   19617 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 16:44:22.507037   19617 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 16:44:22.517901   19617 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 16:44:22.526592   19617 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 16:44:22.535350   19617 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 16:44:22.542768   19617 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 16:44:22.551405   19617 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0914 16:44:22.768261   19617 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0914 16:44:22.838359   19617 start.go:495] detecting cgroup driver to use...
	I0914 16:44:22.838403   19617 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 16:44:22.838515   19617 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 16:44:22.859121   19617 exec_runner.go:51] Run: which cri-dockerd
	I0914 16:44:22.860064   19617 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 16:44:22.868879   19617 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0914 16:44:22.868899   19617 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0914 16:44:22.868936   19617 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0914 16:44:22.877770   19617 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0914 16:44:22.877916   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3472160145 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0914 16:44:22.887560   19617 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0914 16:44:23.115664   19617 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0914 16:44:23.333486   19617 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 16:44:23.333598   19617 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0914 16:44:23.333609   19617 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0914 16:44:23.333645   19617 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0914 16:44:23.342491   19617 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0914 16:44:23.342644   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube236920944 /etc/docker/daemon.json
	I0914 16:44:23.351516   19617 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0914 16:44:23.550708   19617 exec_runner.go:51] Run: sudo systemctl restart docker
	I0914 16:44:23.856776   19617 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 16:44:23.867920   19617 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0914 16:44:23.884606   19617 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 16:44:23.895276   19617 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0914 16:44:24.117722   19617 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0914 16:44:24.332592   19617 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0914 16:44:24.533119   19617 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0914 16:44:24.546847   19617 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 16:44:24.558602   19617 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0914 16:44:24.779562   19617 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0914 16:44:24.849138   19617 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 16:44:24.849200   19617 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0914 16:44:24.850614   19617 start.go:563] Will wait 60s for crictl version
	I0914 16:44:24.850657   19617 exec_runner.go:51] Run: which crictl
	I0914 16:44:24.851681   19617 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0914 16:44:24.881176   19617 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0914 16:44:24.881262   19617 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0914 16:44:24.902493   19617 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0914 16:44:24.926069   19617 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0914 16:44:24.926155   19617 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0914 16:44:24.929076   19617 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0914 16:44:24.930372   19617 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 16:44:24.930492   19617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 16:44:24.930500   19617 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0914 16:44:24.930583   19617 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0914 16:44:24.930627   19617 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0914 16:44:24.978948   19617 cni.go:84] Creating CNI manager for ""
	I0914 16:44:24.978977   19617 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 16:44:24.978990   19617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 16:44:24.979015   19617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 16:44:24.979192   19617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
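	(Note: the generated config above is a single multi-document YAML file with four document kinds: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch of checking that structure with standard shell tools; the /tmp path and the skeleton file are illustrative, not taken from the test run:)

```shell
# Write a skeleton with the same four document kinds as the generated config above
# (/tmp path and skeleton contents are illustrative, not from the test run)
cat > /tmp/kubeadm-sample.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each YAML document carries exactly one top-level kind; count them
grep -c '^kind:' /tmp/kubeadm-sample.yaml   # prints 4
```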
	I0914 16:44:24.979258   19617 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 16:44:24.987701   19617 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0914 16:44:24.987760   19617 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0914 16:44:24.996239   19617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0914 16:44:24.996287   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0914 16:44:24.996297   19617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0914 16:44:24.996338   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0914 16:44:24.996338   19617 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0914 16:44:24.996473   19617 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0914 16:44:25.008822   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0914 16:44:25.043227   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2215681919 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0914 16:44:25.064652   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2150158886 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0914 16:44:25.089986   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2275504800 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0914 16:44:25.154719   19617 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 16:44:25.163591   19617 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0914 16:44:25.163613   19617 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0914 16:44:25.163645   19617 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0914 16:44:25.170922   19617 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0914 16:44:25.171050   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1825804699 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0914 16:44:25.178545   19617 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0914 16:44:25.178568   19617 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0914 16:44:25.178610   19617 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0914 16:44:25.185743   19617 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 16:44:25.185896   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube276333885 /lib/systemd/system/kubelet.service
	I0914 16:44:25.194299   19617 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0914 16:44:25.194405   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2092928647 /var/tmp/minikube/kubeadm.yaml.new
	I0914 16:44:25.202107   19617 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
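	(Note: the grep above is the guard minikube runs before adding the control-plane entry to /etc/hosts. A sketch of that check-then-append pattern against a scratch file; the real code targets /etc/hosts, and the /tmp path and helper name here are ours:)

```shell
# Check-then-append pattern for a hosts entry, against a scratch file
# (the real code targets /etc/hosts; the /tmp path is illustrative)
HOSTS=/tmp/hosts-sketch
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"
add_entry() {
  # Only append when the entry is not already present (keeps the file idempotent)
  grep -q 'control-plane\.minikube\.internal' "$HOSTS" \
    || printf '10.138.0.48\tcontrol-plane.minikube.internal\n' >> "$HOSTS"
}
add_entry
add_entry   # second call is a no-op: the entry already exists
grep -c 'control-plane.minikube.internal' "$HOSTS"   # prints 1
```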
	I0914 16:44:25.203327   19617 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0914 16:44:25.414361   19617 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0914 16:44:25.428393   19617 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube for IP: 10.138.0.48
	I0914 16:44:25.428414   19617 certs.go:194] generating shared ca certs ...
	I0914 16:44:25.428431   19617 certs.go:226] acquiring lock for ca certs: {Name:mkabf7c81d289a54fc95c11a00764edad607319e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:25.428556   19617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-8655/.minikube/ca.key
	I0914 16:44:25.428599   19617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-8655/.minikube/proxy-client-ca.key
	I0914 16:44:25.428608   19617 certs.go:256] generating profile certs ...
	I0914 16:44:25.428663   19617 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.key
	I0914 16:44:25.428681   19617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt with IP's: []
	I0914 16:44:25.589551   19617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt ...
	I0914 16:44:25.589579   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: {Name:mkeeb9414346748d47fb23541127ed7cf577a83e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:25.589716   19617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.key ...
	I0914 16:44:25.589727   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.key: {Name:mk220aae8e73b249f1a7e937d3b67ff3915268a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:25.589787   19617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0914 16:44:25.589802   19617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0914 16:44:25.662472   19617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0914 16:44:25.662497   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk13fb041e76b644d43cc0ea96b1bc51bce6b99c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:25.662615   19617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0914 16:44:25.662625   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mke95bf8d8bbadc1aa004a82c9cdce67b4e64140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:25.662674   19617 certs.go:381] copying /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.crt
	I0914 16:44:25.662744   19617 certs.go:385] copying /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.key
	I0914 16:44:25.662796   19617 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.key
	I0914 16:44:25.662809   19617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0914 16:44:25.773648   19617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.crt ...
	I0914 16:44:25.773684   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.crt: {Name:mk3fe32c4475ed6265f9cd57c7b94522d0d7ba90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:25.773822   19617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.key ...
	I0914 16:44:25.773832   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.key: {Name:mkc6eb3c11203dafaa9ada36ad43ce1fa8b2a819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:25.773982   19617 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8655/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 16:44:25.774020   19617 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8655/.minikube/certs/ca.pem (1082 bytes)
	I0914 16:44:25.774042   19617 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8655/.minikube/certs/cert.pem (1123 bytes)
	I0914 16:44:25.774067   19617 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-8655/.minikube/certs/key.pem (1679 bytes)
	I0914 16:44:25.774608   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 16:44:25.774719   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube104447655 /var/lib/minikube/certs/ca.crt
	I0914 16:44:25.785001   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 16:44:25.785152   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1634766665 /var/lib/minikube/certs/ca.key
	I0914 16:44:25.792850   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 16:44:25.792982   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2116259040 /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 16:44:25.801281   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 16:44:25.801397   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube51066552 /var/lib/minikube/certs/proxy-client-ca.key
	I0914 16:44:25.810096   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0914 16:44:25.810227   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1760179593 /var/lib/minikube/certs/apiserver.crt
	I0914 16:44:25.818232   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 16:44:25.818357   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1427300788 /var/lib/minikube/certs/apiserver.key
	I0914 16:44:25.826009   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 16:44:25.826128   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2236961880 /var/lib/minikube/certs/proxy-client.crt
	I0914 16:44:25.833827   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 16:44:25.833944   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2939352923 /var/lib/minikube/certs/proxy-client.key
	I0914 16:44:25.842008   19617 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0914 16:44:25.842032   19617 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:25.842064   19617 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:25.849337   19617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-8655/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 16:44:25.849492   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4213054633 /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:25.857612   19617 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 16:44:25.857738   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3974229343 /var/lib/minikube/kubeconfig
	I0914 16:44:25.866551   19617 exec_runner.go:51] Run: openssl version
	I0914 16:44:25.869414   19617 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 16:44:25.878768   19617 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:25.880120   19617 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:25.880165   19617 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:25.882914   19617 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
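	(Note: the `openssl x509 -hash -noout` run and the `b5213941.0` symlink above are the two halves of OpenSSL's CA lookup scheme: the trust-directory symlink is named after the certificate's subject-name hash plus a `.0` suffix. A sketch with a throwaway self-signed cert, assuming `openssl` is installed; the /tmp paths and the `sketchCA` CN are ours:)

```shell
# Generate a throwaway self-signed CA cert (paths and CN are illustrative)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=sketchCA' -days 1 \
  -keyout /tmp/sketch-ca.key -out /tmp/sketch-ca.pem 2>/dev/null
# OpenSSL locates CA certs by subject-name hash; the symlink is "<hash>.0"
mkdir -p /tmp/certs-sketch
HASH=$(openssl x509 -hash -noout -in /tmp/sketch-ca.pem)
ln -fs /tmp/sketch-ca.pem "/tmp/certs-sketch/${HASH}.0"
readlink "/tmp/certs-sketch/${HASH}.0"   # prints /tmp/sketch-ca.pem
```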
	I0914 16:44:25.890643   19617 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 16:44:25.891694   19617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 16:44:25.891801   19617 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:25.891896   19617 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 16:44:25.907302   19617 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 16:44:25.916148   19617 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 16:44:25.924670   19617 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0914 16:44:25.945400   19617 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 16:44:25.954027   19617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 16:44:25.954053   19617 kubeadm.go:157] found existing configuration files:
	
	I0914 16:44:25.954099   19617 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 16:44:25.961882   19617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 16:44:25.961935   19617 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 16:44:25.969420   19617 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 16:44:25.977539   19617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 16:44:25.977585   19617 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 16:44:25.985141   19617 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 16:44:25.992957   19617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 16:44:25.993010   19617 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 16:44:26.001035   19617 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 16:44:26.008992   19617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 16:44:26.009047   19617 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 16:44:26.016578   19617 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0914 16:44:26.049813   19617 kubeadm.go:310] W0914 16:44:26.049689   20507 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:44:26.050364   19617 kubeadm.go:310] W0914 16:44:26.050316   20507 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:44:26.052283   19617 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 16:44:26.052310   19617 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 16:44:26.150217   19617 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 16:44:26.150331   19617 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 16:44:26.150351   19617 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 16:44:26.150357   19617 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 16:44:26.162768   19617 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 16:44:26.165602   19617 out.go:235]   - Generating certificates and keys ...
	I0914 16:44:26.165662   19617 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 16:44:26.165683   19617 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 16:44:26.437106   19617 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 16:44:26.556457   19617 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 16:44:26.743345   19617 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 16:44:26.895221   19617 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 16:44:27.090424   19617 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 16:44:27.090528   19617 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0914 16:44:27.181932   19617 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 16:44:27.181992   19617 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0914 16:44:27.341199   19617 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 16:44:27.485006   19617 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 16:44:27.547620   19617 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 16:44:27.547795   19617 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 16:44:27.677505   19617 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 16:44:27.883265   19617 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 16:44:28.003184   19617 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 16:44:28.165055   19617 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 16:44:28.232427   19617 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 16:44:28.232994   19617 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 16:44:28.236346   19617 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 16:44:28.238289   19617 out.go:235]   - Booting up control plane ...
	I0914 16:44:28.238319   19617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 16:44:28.238334   19617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 16:44:28.238769   19617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 16:44:28.260482   19617 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 16:44:28.265023   19617 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 16:44:28.265056   19617 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 16:44:28.493898   19617 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 16:44:28.493921   19617 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 16:44:28.995539   19617 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.613507ms
	I0914 16:44:28.995560   19617 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 16:44:33.996862   19617 kubeadm.go:310] [api-check] The API server is healthy after 5.001292632s
	I0914 16:44:34.008672   19617 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 16:44:34.018511   19617 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 16:44:34.038306   19617 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 16:44:34.038332   19617 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 16:44:34.045994   19617 kubeadm.go:310] [bootstrap-token] Using token: ft2von.kc52wnpvqyfsjgiu
	I0914 16:44:34.047466   19617 out.go:235]   - Configuring RBAC rules ...
	I0914 16:44:34.047502   19617 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 16:44:34.052998   19617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 16:44:34.059283   19617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 16:44:34.061991   19617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 16:44:34.065735   19617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 16:44:34.068220   19617 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 16:44:34.402850   19617 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 16:44:34.823500   19617 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 16:44:35.402519   19617 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 16:44:35.403426   19617 kubeadm.go:310] 
	I0914 16:44:35.403446   19617 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 16:44:35.403450   19617 kubeadm.go:310] 
	I0914 16:44:35.403455   19617 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 16:44:35.403458   19617 kubeadm.go:310] 
	I0914 16:44:35.403462   19617 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 16:44:35.403466   19617 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 16:44:35.403475   19617 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 16:44:35.403479   19617 kubeadm.go:310] 
	I0914 16:44:35.403482   19617 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 16:44:35.403486   19617 kubeadm.go:310] 
	I0914 16:44:35.403490   19617 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 16:44:35.403494   19617 kubeadm.go:310] 
	I0914 16:44:35.403498   19617 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 16:44:35.403501   19617 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 16:44:35.403505   19617 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 16:44:35.403509   19617 kubeadm.go:310] 
	I0914 16:44:35.403514   19617 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 16:44:35.403518   19617 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 16:44:35.403530   19617 kubeadm.go:310] 
	I0914 16:44:35.403534   19617 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ft2von.kc52wnpvqyfsjgiu \
	I0914 16:44:35.403539   19617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:68961a82ffe34fae8232e0c205e606d553937e4905f80aa3a46efb6dab83a5f6 \
	I0914 16:44:35.403543   19617 kubeadm.go:310] 	--control-plane 
	I0914 16:44:35.403548   19617 kubeadm.go:310] 
	I0914 16:44:35.403552   19617 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 16:44:35.403557   19617 kubeadm.go:310] 
	I0914 16:44:35.403561   19617 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ft2von.kc52wnpvqyfsjgiu \
	I0914 16:44:35.403568   19617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:68961a82ffe34fae8232e0c205e606d553937e4905f80aa3a46efb6dab83a5f6 
	I0914 16:44:35.406370   19617 cni.go:84] Creating CNI manager for ""
	I0914 16:44:35.406403   19617 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 16:44:35.408375   19617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 16:44:35.409753   19617 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0914 16:44:35.420732   19617 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 16:44:35.420863   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1350110264 /etc/cni/net.d/1-k8s.conflist
	I0914 16:44:35.429805   19617 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 16:44:35.429866   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:35.429895   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_14T16_44_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0914 16:44:35.439434   19617 ops.go:34] apiserver oom_adj: -16
	I0914 16:44:35.503182   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:36.004061   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:36.504026   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:37.003608   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:37.504088   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:38.003499   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:38.503337   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:39.004277   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:39.503714   19617 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:39.573532   19617 kubeadm.go:1113] duration metric: took 4.143716502s to wait for elevateKubeSystemPrivileges
	I0914 16:44:39.573577   19617 kubeadm.go:394] duration metric: took 13.681848287s to StartCluster
	I0914 16:44:39.573603   19617 settings.go:142] acquiring lock: {Name:mk531f704d9c6645f5bc3e440e93db4755990cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:39.573677   19617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-8655/kubeconfig
	I0914 16:44:39.574295   19617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-8655/kubeconfig: {Name:mk692f4e2eab6e59f9c9caa3797b7dc7c873ae1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:39.574546   19617 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 16:44:39.574633   19617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 16:44:39.574794   19617 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 16:44:39.574805   19617 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0914 16:44:39.574818   19617 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0914 16:44:39.574824   19617 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0914 16:44:39.574834   19617 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0914 16:44:39.574835   19617 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0914 16:44:39.574837   19617 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0914 16:44:39.574798   19617 addons.go:69] Setting yakd=true in profile "minikube"
	I0914 16:44:39.574860   19617 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0914 16:44:39.574810   19617 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0914 16:44:39.574886   19617 addons.go:234] Setting addon yakd=true in "minikube"
	I0914 16:44:39.574888   19617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0914 16:44:39.574893   19617 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0914 16:44:39.574896   19617 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0914 16:44:39.574897   19617 addons.go:69] Setting registry=true in profile "minikube"
	I0914 16:44:39.574911   19617 addons.go:234] Setting addon registry=true in "minikube"
	I0914 16:44:39.574913   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.574916   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.574916   19617 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0914 16:44:39.574937   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.574863   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.574945   19617 addons.go:69] Setting volcano=true in profile "minikube"
	I0914 16:44:39.574956   19617 addons.go:234] Setting addon volcano=true in "minikube"
	I0914 16:44:39.574973   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.575050   19617 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0914 16:44:39.575068   19617 mustload.go:65] Loading cluster: minikube
	I0914 16:44:39.574897   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.574857   19617 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0914 16:44:39.575650   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.575839   19617 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0914 16:44:39.575858   19617 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0914 16:44:39.575882   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.575914   19617 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 16:44:39.576328   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.576346   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.576381   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.576467   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.576476   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.576489   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.576504   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.576522   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.576539   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.576561   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.576572   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.576600   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.576623   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.576637   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.576684   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.576977   19617 out.go:177] * Configuring local host environment ...
	I0914 16:44:39.574815   19617 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0914 16:44:39.574850   19617 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0914 16:44:39.577608   19617 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0914 16:44:39.577708   19617 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0914 16:44:39.577759   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.574937   19617 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0914 16:44:39.577935   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.578407   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.578422   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.578460   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.578587   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.578614   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.578643   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.579184   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.579194   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.579216   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.579232   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.579272   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.579320   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.579330   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.579347   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.579221   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.574920   19617 host.go:66] Checking if "minikube" exists ...
	W0914 16:44:39.579663   19617 out.go:270] * 
	W0914 16:44:39.579686   19617 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0914 16:44:39.579694   19617 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0914 16:44:39.579707   19617 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0914 16:44:39.579713   19617 out.go:270] * 
	W0914 16:44:39.579974   19617 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0914 16:44:39.579998   19617 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0914 16:44:39.580004   19617 out.go:270] * 
	W0914 16:44:39.580021   19617 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0914 16:44:39.580029   19617 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0914 16:44:39.580035   19617 out.go:270] * 
	W0914 16:44:39.580040   19617 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0914 16:44:39.580062   19617 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 16:44:39.581412   19617 out.go:177] * Verifying Kubernetes components...
	I0914 16:44:39.583172   19617 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0914 16:44:39.593051   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.611818   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.611859   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.611914   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.612027   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.612050   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.612078   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.612752   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.612774   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.612806   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.615480   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.615522   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.612082   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.641946   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.641986   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.642160   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.642392   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.644817   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.644941   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.645296   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.646194   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.655620   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.665962   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.666040   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.667242   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.667313   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.671363   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.671397   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.672432   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.672507   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.682181   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.682293   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.682314   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.683365   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.684351   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.685047   19617 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0914 16:44:39.685091   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.685777   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.685792   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.685824   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.686627   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.687432   19617 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0914 16:44:39.687474   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.688123   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:39.688139   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:39.688172   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:39.688403   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.692226   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.693402   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.698942   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.699010   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.703643   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.710895   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.710965   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.714945   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.714971   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.715420   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.715442   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.717513   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.717565   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.717797   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.718658   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.718703   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.719794   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.721650   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.721693   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.722333   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.722487   19617 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 16:44:39.723038   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.723099   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.724702   19617 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 16:44:39.724707   19617 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 16:44:39.724743   19617 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 16:44:39.724887   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1978092305 /etc/kubernetes/addons/yakd-ns.yaml
	I0914 16:44:39.725103   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.725150   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.726608   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.727345   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.727503   19617 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 16:44:39.728973   19617 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 16:44:39.729005   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 16:44:39.729145   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube198539881 /etc/kubernetes/addons/registry-rc.yaml
	I0914 16:44:39.730705   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.730728   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.732197   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.732252   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.733798   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.733974   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.736169   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.737083   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.737108   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.738126   19617 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 16:44:39.739614   19617 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 16:44:39.739647   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 16:44:39.739769   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1835641774 /etc/kubernetes/addons/deployment.yaml
	I0914 16:44:39.741365   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.741428   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.741847   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.741869   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.742171   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.742457   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.746022   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.746040   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.746554   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.747985   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.748003   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.749727   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 16:44:39.749790   19617 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0914 16:44:39.749809   19617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 16:44:39.750729   19617 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 16:44:39.750757   19617 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 16:44:39.750897   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2855675519 /etc/kubernetes/addons/registry-svc.yaml
	I0914 16:44:39.750547   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.751157   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.751308   19617 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 16:44:39.751385   19617 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 16:44:39.751454   19617 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0914 16:44:39.751477   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0914 16:44:39.751817   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3951659927 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 16:44:39.752088   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube730978047 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0914 16:44:39.752401   19617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:44:39.752417   19617 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0914 16:44:39.752423   19617 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:44:39.752461   19617 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:44:39.753104   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.753118   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.753318   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.753334   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:39.753671   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.753932   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.754353   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.754535   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.754879   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:39.755135   19617 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 16:44:39.755157   19617 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 16:44:39.755250   19617 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 16:44:39.755721   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 16:44:39.757750   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.758502   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1065323442 /etc/kubernetes/addons/yakd-sa.yaml
	I0914 16:44:39.759821   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.760320   19617 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0914 16:44:39.761127   19617 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 16:44:39.763190   19617 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0914 16:44:39.763246   19617 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 16:44:39.763265   19617 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 16:44:39.763382   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1382644313 /etc/kubernetes/addons/ig-namespace.yaml
	I0914 16:44:39.766597   19617 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0914 16:44:39.768836   19617 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 16:44:39.768875   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0914 16:44:39.769409   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2335675413 /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 16:44:39.779774   19617 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:44:39.779821   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 16:44:39.779965   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube692301555 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:44:39.782371   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 16:44:39.783505   19617 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0914 16:44:39.783535   19617 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0914 16:44:39.783652   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3404443176 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0914 16:44:39.785762   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 16:44:39.787424   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 16:44:39.789238   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 16:44:39.789768   19617 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 16:44:39.792879   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 16:44:39.794360   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 16:44:39.796003   19617 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 16:44:39.796030   19617 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 16:44:39.796158   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3676631226 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 16:44:39.797597   19617 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:44:39.797622   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 16:44:39.797696   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 16:44:39.797728   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube913341377 /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:44:39.798199   19617 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 16:44:39.798226   19617 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 16:44:39.799576   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2420065541 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 16:44:39.800502   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.800552   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.800687   19617 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 16:44:39.802087   19617 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 16:44:39.802116   19617 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 16:44:39.802752   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2319154005 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 16:44:39.803563   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:39.803607   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:39.811062   19617 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 16:44:39.811096   19617 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 16:44:39.811175   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 16:44:39.811223   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4082753145 /etc/kubernetes/addons/yakd-crb.yaml
	I0914 16:44:39.811618   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1224331741 /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:44:39.820212   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 16:44:39.820212   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:44:39.820446   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.820463   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.829905   19617 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 16:44:39.829945   19617 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 16:44:39.830068   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube693549430 /etc/kubernetes/addons/yakd-svc.yaml
	I0914 16:44:39.831131   19617 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 16:44:39.831147   19617 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 16:44:39.831166   19617 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 16:44:39.831170   19617 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 16:44:39.831296   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1038722316 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 16:44:39.831298   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2650258606 /etc/kubernetes/addons/ig-role.yaml
	I0914 16:44:39.831426   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.831444   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.831549   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:44:39.832125   19617 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:44:39.832152   19617 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0914 16:44:39.832256   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1046786396 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:44:39.832898   19617 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 16:44:39.832922   19617 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 16:44:39.833053   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1490045399 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 16:44:39.833593   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:44:39.835887   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:39.835911   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:39.836412   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.836855   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.840519   19617 out.go:177]   - Using image docker.io/busybox:stable
	I0914 16:44:39.841902   19617 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 16:44:39.841955   19617 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:44:39.841978   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 16:44:39.841984   19617 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 16:44:39.842121   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3918501919 /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:44:39.844902   19617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:44:39.844939   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 16:44:39.845082   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2015240592 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:44:39.845245   19617 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 16:44:39.845272   19617 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 16:44:39.845380   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2259872248 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 16:44:39.848025   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:39.848077   19617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 16:44:39.848092   19617 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0914 16:44:39.848098   19617 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0914 16:44:39.848135   19617 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0914 16:44:39.867963   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:44:39.880461   19617 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 16:44:39.880473   19617 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 16:44:39.880499   19617 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 16:44:39.880502   19617 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 16:44:39.880649   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2241975002 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 16:44:39.880725   19617 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 16:44:39.880745   19617 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 16:44:39.880867   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3877240582 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 16:44:39.880904   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:44:39.880650   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube857835425 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 16:44:39.884514   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0914 16:44:39.946244   19617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 16:44:39.946266   19617 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 16:44:39.946287   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 16:44:39.946293   19617 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 16:44:39.946440   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2896298260 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 16:44:39.946451   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3001985595 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 16:44:39.948430   19617 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 16:44:39.948575   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3415037225 /etc/kubernetes/addons/storageclass.yaml
	I0914 16:44:39.961990   19617 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 16:44:39.962038   19617 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 16:44:39.962198   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3659503240 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 16:44:39.967064   19617 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 16:44:39.967101   19617 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 16:44:39.967238   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1277098522 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 16:44:39.967390   19617 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 16:44:39.967414   19617 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 16:44:39.967535   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2921921619 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 16:44:39.971526   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 16:44:39.979179   19617 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 16:44:39.979223   19617 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 16:44:39.979804   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3346990009 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 16:44:39.983187   19617 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:44:39.983217   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 16:44:39.983342   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2157265874 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:44:39.989773   19617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 16:44:39.989808   19617 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 16:44:39.989936   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3536794669 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 16:44:39.993462   19617 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 16:44:39.993487   19617 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 16:44:39.993591   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube410740837 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 16:44:40.000613   19617 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 16:44:40.000651   19617 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 16:44:40.000786   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3834533090 /etc/kubernetes/addons/ig-crd.yaml
	I0914 16:44:40.007589   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:44:40.059369   19617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:44:40.059407   19617 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 16:44:40.059549   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1950829112 /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:44:40.066491   19617 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0914 16:44:40.078449   19617 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:44:40.078481   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 16:44:40.078610   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1505567655 /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:44:40.103417   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:44:40.110597   19617 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 16:44:40.110626   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 16:44:40.110745   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3608516734 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 16:44:40.127153   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:44:40.136970   19617 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0914 16:44:40.140650   19617 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0914 16:44:40.140679   19617 node_ready.go:38] duration metric: took 3.667016ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0914 16:44:40.140694   19617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:44:40.149930   19617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ck9l6" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:40.231541   19617 addons.go:475] Verifying addon registry=true in "minikube"
	I0914 16:44:40.234728   19617 out.go:177] * Verifying registry addon...
	I0914 16:44:40.237569   19617 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 16:44:40.237602   19617 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 16:44:40.237737   19617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 16:44:40.237747   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1488632216 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 16:44:40.255540   19617 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 16:44:40.255566   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:40.420468   19617 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 16:44:40.420507   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 16:44:40.420720   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube86225546 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 16:44:40.524663   19617 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0914 16:44:40.546449   19617 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 16:44:40.546489   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 16:44:40.546625   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2026926329 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 16:44:40.651966   19617 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:44:40.651996   19617 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 16:44:40.652132   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3393377841 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:44:40.742167   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:40.803142   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:44:40.913001   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.079367695s)
	I0914 16:44:41.043883   19617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0914 16:44:41.158407   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.27384559s)
	I0914 16:44:41.172326   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.291385112s)
	I0914 16:44:41.246145   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:41.252590   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.384573557s)
	I0914 16:44:41.255632   19617 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0914 16:44:41.290842   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.187374556s)
	I0914 16:44:41.290927   19617 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0914 16:44:41.481342   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.354126478s)
	I0914 16:44:41.646467   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.638806079s)
	W0914 16:44:41.646510   19617 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:44:41.646550   19617 retry.go:31] will retry after 306.771913ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:44:41.741630   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:41.954080   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:44:42.157443   19617 pod_ready.go:103] pod "coredns-7c65d6cfc9-ck9l6" in "kube-system" namespace has status "Ready":"False"
	I0914 16:44:42.248243   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:42.742495   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:42.843006   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.022747164s)
	I0914 16:44:43.242969   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:43.267586   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.464373839s)
	I0914 16:44:43.267629   19617 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0914 16:44:43.269288   19617 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 16:44:43.271786   19617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 16:44:43.293591   19617 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 16:44:43.293626   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:43.742535   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:43.776669   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:44.241489   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:44.275678   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:44.655592   19617 pod_ready.go:103] pod "coredns-7c65d6cfc9-ck9l6" in "kube-system" namespace has status "Ready":"False"
	I0914 16:44:44.711741   19617 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.757600567s)
	I0914 16:44:44.742628   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:44.844516   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:45.241882   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:45.276925   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:45.655972   19617 pod_ready.go:93] pod "coredns-7c65d6cfc9-ck9l6" in "kube-system" namespace has status "Ready":"True"
	I0914 16:44:45.655994   19617 pod_ready.go:82] duration metric: took 5.506031162s for pod "coredns-7c65d6cfc9-ck9l6" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:45.656004   19617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lfhvs" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:45.742504   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:45.844543   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:46.291005   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:46.291311   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:46.742051   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:46.761068   19617 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 16:44:46.761244   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2208404685 /var/lib/minikube/google_application_credentials.json
	I0914 16:44:46.771374   19617 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 16:44:46.771495   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube397004928 /var/lib/minikube/google_cloud_project
	I0914 16:44:46.781268   19617 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0914 16:44:46.781316   19617 host.go:66] Checking if "minikube" exists ...
	I0914 16:44:46.781909   19617 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0914 16:44:46.781932   19617 api_server.go:166] Checking apiserver status ...
	I0914 16:44:46.781960   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:46.799875   19617 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20932/cgroup
	I0914 16:44:46.811534   19617 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9"
	I0914 16:44:46.811612   19617 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b998d64232e6b0875a98db1bcedbb2187afd5c9dbbdb129942c20c7a46188cb9/freezer.state
	I0914 16:44:46.821660   19617 api_server.go:204] freezer state: "THAWED"
	I0914 16:44:46.821699   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:46.826618   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:46.826686   19617 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 16:44:46.830133   19617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:44:46.831707   19617 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 16:44:46.833088   19617 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 16:44:46.833131   19617 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 16:44:46.833276   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1436993776 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 16:44:46.843761   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:46.844156   19617 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 16:44:46.844191   19617 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 16:44:46.844323   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3305065995 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 16:44:46.854672   19617 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:44:46.854710   19617 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 16:44:46.854846   19617 exec_runner.go:51] Run: sudo cp -a /tmp/minikube992653460 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:44:46.867338   19617 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:44:47.243443   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:47.263491   19617 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0914 16:44:47.265250   19617 out.go:177] * Verifying gcp-auth addon...
	I0914 16:44:47.267791   19617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 16:44:47.343873   19617 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 16:44:47.345772   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:47.662668   19617 pod_ready.go:103] pod "coredns-7c65d6cfc9-lfhvs" in "kube-system" namespace has status "Ready":"False"
	I0914 16:44:47.742288   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:47.775800   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:48.242207   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:48.341971   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:48.742403   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:48.776375   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:49.162661   19617 pod_ready.go:98] pod "coredns-7c65d6cfc9-lfhvs" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:48 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}]
PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-14 16:44:39 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:44:41 +0000 UTC,FinishedAt:2024-09-14 16:44:48 +0000 UTC,ContainerID:docker://883243c197d7e2a1ef4913f9e0a16104d503c2e0a7c6c580514c230346152f46,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://883243c197d7e2a1ef4913f9e0a16104d503c2e0a7c6c580514c230346152f46 Started:0xc001b68860 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0005c7060} {Name:kube-api-access-hmct6 MountPath:/var/run/secrets/kubernetes.io/serviceaccount R
eadOnly:true RecursiveReadOnly:0xc0005c7070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:44:49.162691   19617 pod_ready.go:82] duration metric: took 3.506679742s for pod "coredns-7c65d6cfc9-lfhvs" in "kube-system" namespace to be "Ready" ...
	E0914 16:44:49.162706   19617 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-lfhvs" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:48 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-14 16:44:39 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.
48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-14 16:44:39 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-14 16:44:41 +0000 UTC,FinishedAt:2024-09-14 16:44:48 +0000 UTC,ContainerID:docker://883243c197d7e2a1ef4913f9e0a16104d503c2e0a7c6c580514c230346152f46,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://883243c197d7e2a1ef4913f9e0a16104d503c2e0a7c6c580514c230346152f46 Started:0xc001b68860 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0005c7060} {Name:kube-api-access-hmct6 MountPath:/var/run/secrets/k
ubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0005c7070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0914 16:44:49.162721   19617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:49.167150   19617 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0914 16:44:49.167170   19617 pod_ready.go:82] duration metric: took 4.440303ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:49.167181   19617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:49.241993   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:49.275925   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:49.742311   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:49.775780   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:50.173445   19617 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0914 16:44:50.173467   19617 pod_ready.go:82] duration metric: took 1.006278554s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:50.173489   19617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:50.177639   19617 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0914 16:44:50.177662   19617 pod_ready.go:82] duration metric: took 4.163769ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:50.177675   19617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hwvcp" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:50.181958   19617 pod_ready.go:93] pod "kube-proxy-hwvcp" in "kube-system" namespace has status "Ready":"True"
	I0914 16:44:50.181979   19617 pod_ready.go:82] duration metric: took 4.297561ms for pod "kube-proxy-hwvcp" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:50.181988   19617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:50.241704   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:50.343366   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:50.360796   19617 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0914 16:44:50.360822   19617 pod_ready.go:82] duration metric: took 178.826474ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0914 16:44:50.360834   19617 pod_ready.go:39] duration metric: took 10.220125804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:44:50.360857   19617 api_server.go:52] waiting for apiserver process to appear ...
	I0914 16:44:50.360919   19617 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:44:50.382174   19617 api_server.go:72] duration metric: took 10.801996299s to wait for apiserver process to appear ...
	I0914 16:44:50.382201   19617 api_server.go:88] waiting for apiserver healthz status ...
	I0914 16:44:50.382231   19617 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0914 16:44:50.386248   19617 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0914 16:44:50.387242   19617 api_server.go:141] control plane version: v1.31.1
	I0914 16:44:50.387272   19617 api_server.go:131] duration metric: took 5.063335ms to wait for apiserver health ...
	I0914 16:44:50.387282   19617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 16:44:50.566704   19617 system_pods.go:59] 17 kube-system pods found
	I0914 16:44:50.566737   19617 system_pods.go:61] "coredns-7c65d6cfc9-ck9l6" [c32af496-a7e6-40c1-82ad-72416f47c3e5] Running
	I0914 16:44:50.566749   19617 system_pods.go:61] "csi-hostpath-attacher-0" [43550752-a32f-4574-9fce-ab14cb599b96] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:44:50.566757   19617 system_pods.go:61] "csi-hostpath-resizer-0" [fcccd001-cbfe-440c-b589-f3a0a7a379f4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:44:50.566771   19617 system_pods.go:61] "csi-hostpathplugin-ztdqz" [d630f48b-b0e8-4441-8683-5065e8c5d037] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:44:50.566782   19617 system_pods.go:61] "etcd-ubuntu-20-agent-2" [43117b1b-88bc-49e9-83ba-a4427f23f655] Running
	I0914 16:44:50.566789   19617 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [3d7b337c-a5ba-4e93-9b17-ef5f39095e88] Running
	I0914 16:44:50.566797   19617 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [57d97977-e02a-4870-9dc9-2d6fe5ab9f33] Running
	I0914 16:44:50.566802   19617 system_pods.go:61] "kube-proxy-hwvcp" [0edac814-dfe3-4e4a-9abb-284e9049a2e5] Running
	I0914 16:44:50.566809   19617 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [023202d6-345b-47d5-b1d1-011ee7eec174] Running
	I0914 16:44:50.566817   19617 system_pods.go:61] "metrics-server-84c5f94fbc-w6gkb" [a30809cd-34d2-40f2-bc91-25cf59d4d63f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:44:50.566830   19617 system_pods.go:61] "nvidia-device-plugin-daemonset-x8gpg" [4cdaa783-c778-4569-b473-095114459f82] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0914 16:44:50.566840   19617 system_pods.go:61] "registry-66c9cd494c-l64nd" [a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c] Running
	I0914 16:44:50.566849   19617 system_pods.go:61] "registry-proxy-bst86" [04bf6491-0898-4738-9ad3-f4f343173ece] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:44:50.566882   19617 system_pods.go:61] "snapshot-controller-56fcc65765-9qxmm" [8f5f4a44-ba25-4c7d-aba3-39b8b84da617] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:44:50.566895   19617 system_pods.go:61] "snapshot-controller-56fcc65765-m9559" [76ef6c37-0939-41e3-9e1a-f3786abf611e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:44:50.566902   19617 system_pods.go:61] "storage-provisioner" [52d0c0ff-7df8-42a3-8bcb-6dad246ef4df] Running
	I0914 16:44:50.566914   19617 system_pods.go:61] "tiller-deploy-b48cc5f79-fbdhl" [715c342f-6073-4f83-840f-a4843a421dc6] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:44:50.566922   19617 system_pods.go:74] duration metric: took 179.632839ms to wait for pod list to return data ...
	I0914 16:44:50.566935   19617 default_sa.go:34] waiting for default service account to be created ...
	I0914 16:44:50.742096   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:50.760547   19617 default_sa.go:45] found service account: "default"
	I0914 16:44:50.760571   19617 default_sa.go:55] duration metric: took 193.629889ms for default service account to be created ...
	I0914 16:44:50.760581   19617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 16:44:50.961177   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:50.967667   19617 system_pods.go:86] 17 kube-system pods found
	I0914 16:44:50.967697   19617 system_pods.go:89] "coredns-7c65d6cfc9-ck9l6" [c32af496-a7e6-40c1-82ad-72416f47c3e5] Running
	I0914 16:44:50.967706   19617 system_pods.go:89] "csi-hostpath-attacher-0" [43550752-a32f-4574-9fce-ab14cb599b96] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:44:50.967713   19617 system_pods.go:89] "csi-hostpath-resizer-0" [fcccd001-cbfe-440c-b589-f3a0a7a379f4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:44:50.967721   19617 system_pods.go:89] "csi-hostpathplugin-ztdqz" [d630f48b-b0e8-4441-8683-5065e8c5d037] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:44:50.967725   19617 system_pods.go:89] "etcd-ubuntu-20-agent-2" [43117b1b-88bc-49e9-83ba-a4427f23f655] Running
	I0914 16:44:50.967729   19617 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [3d7b337c-a5ba-4e93-9b17-ef5f39095e88] Running
	I0914 16:44:50.967734   19617 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [57d97977-e02a-4870-9dc9-2d6fe5ab9f33] Running
	I0914 16:44:50.967738   19617 system_pods.go:89] "kube-proxy-hwvcp" [0edac814-dfe3-4e4a-9abb-284e9049a2e5] Running
	I0914 16:44:50.967741   19617 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [023202d6-345b-47d5-b1d1-011ee7eec174] Running
	I0914 16:44:50.967748   19617 system_pods.go:89] "metrics-server-84c5f94fbc-w6gkb" [a30809cd-34d2-40f2-bc91-25cf59d4d63f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:44:50.967753   19617 system_pods.go:89] "nvidia-device-plugin-daemonset-x8gpg" [4cdaa783-c778-4569-b473-095114459f82] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0914 16:44:50.967758   19617 system_pods.go:89] "registry-66c9cd494c-l64nd" [a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c] Running
	I0914 16:44:50.967764   19617 system_pods.go:89] "registry-proxy-bst86" [04bf6491-0898-4738-9ad3-f4f343173ece] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:44:50.967769   19617 system_pods.go:89] "snapshot-controller-56fcc65765-9qxmm" [8f5f4a44-ba25-4c7d-aba3-39b8b84da617] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:44:50.967775   19617 system_pods.go:89] "snapshot-controller-56fcc65765-m9559" [76ef6c37-0939-41e3-9e1a-f3786abf611e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:44:50.967780   19617 system_pods.go:89] "storage-provisioner" [52d0c0ff-7df8-42a3-8bcb-6dad246ef4df] Running
	I0914 16:44:50.967785   19617 system_pods.go:89] "tiller-deploy-b48cc5f79-fbdhl" [715c342f-6073-4f83-840f-a4843a421dc6] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0914 16:44:50.967792   19617 system_pods.go:126] duration metric: took 207.206581ms to wait for k8s-apps to be running ...
	I0914 16:44:50.967801   19617 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 16:44:50.967842   19617 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0914 16:44:50.979493   19617 system_svc.go:56] duration metric: took 11.679261ms WaitForService to wait for kubelet
	I0914 16:44:50.979527   19617 kubeadm.go:582] duration metric: took 11.399355821s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:44:50.979556   19617 node_conditions.go:102] verifying NodePressure condition ...
	I0914 16:44:51.273873   19617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0914 16:44:51.273903   19617 node_conditions.go:123] node cpu capacity is 8
	I0914 16:44:51.273918   19617 node_conditions.go:105] duration metric: took 294.355962ms to run NodePressure ...
	I0914 16:44:51.273931   19617 start.go:241] waiting for startup goroutines ...
	I0914 16:44:51.274257   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:51.275757   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:51.741688   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:51.775553   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:52.241059   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:52.275829   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:52.742185   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:52.775906   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:53.240672   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:53.276369   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:53.742217   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:53.776050   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:54.241132   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:54.275609   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:54.741358   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:54.775787   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:55.241691   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:44:55.275755   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:55.741147   19617 kapi.go:107] duration metric: took 15.503409471s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 16:44:55.774843   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:56.275523   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:56.776430   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:57.275972   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:57.776407   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:58.276383   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:58.775184   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:59.276478   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:44:59.775440   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:00.275163   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:00.776553   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:01.276199   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:01.776177   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:02.276437   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:02.886781   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:03.275131   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:03.776260   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:04.275736   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:04.775911   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:05.276224   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:05.775927   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:06.276145   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:06.775381   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:07.275408   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:07.776317   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:08.276250   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:08.775878   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:09.275773   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:09.776579   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:10.276170   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:10.775838   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:11.276044   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:11.775630   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:12.275524   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:12.776630   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:13.276268   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:13.775458   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:14.276093   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:14.775781   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:15.277466   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:15.775742   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:16.276334   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:16.775923   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:17.276142   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:17.775979   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:18.276563   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:18.872884   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:19.276764   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:19.775686   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:20.275682   19617 kapi.go:107] duration metric: took 37.003894759s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 16:45:29.770815   19617 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 16:45:29.770835   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.271143   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:30.770610   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.271873   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:31.770964   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.270719   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:32.771125   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.271567   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:33.771741   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.270186   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:34.771209   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.271374   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:35.771326   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.271079   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:36.771477   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.270962   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:37.771335   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.271317   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:38.771052   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.271031   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:39.771223   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.271156   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:40.770628   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.271849   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:41.770753   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.270559   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:42.771172   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.271317   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:43.771813   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.270537   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:44.771652   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.271162   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:45.770821   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.271103   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:46.771120   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.270975   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:47.771378   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.271182   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:48.770836   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.270923   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:49.770739   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.270503   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:50.771530   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.271746   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:51.771452   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.271690   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:52.771378   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.271640   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:53.770601   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.271678   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:54.772774   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.271123   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:55.770862   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.270657   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:56.771790   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.270998   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:57.771327   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.271881   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:58.770577   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.271788   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:45:59.771768   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.270627   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:00.770724   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.270991   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:01.771782   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.270582   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:02.771220   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.270934   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:03.771105   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.271078   19617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:04.770531   19617 kapi.go:107] duration metric: took 1m17.502738823s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 16:46:04.771923   19617 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0914 16:46:04.773106   19617 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 16:46:04.774269   19617 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 16:46:04.775533   19617 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, helm-tiller, storage-provisioner-rancher, yakd, metrics-server, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0914 16:46:04.776865   19617 addons.go:510] duration metric: took 1m25.202232987s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner helm-tiller storage-provisioner-rancher yakd metrics-server inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0914 16:46:04.776922   19617 start.go:246] waiting for cluster config update ...
	I0914 16:46:04.776951   19617 start.go:255] writing updated cluster config ...
	I0914 16:46:04.777201   19617 exec_runner.go:51] Run: rm -f paused
	I0914 16:46:04.823931   19617 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 16:46:04.825678   19617 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Thu 2024-08-01 22:17:46 UTC, end at Sat 2024-09-14 16:55:58 UTC. --
	Sep 14 16:48:10 ubuntu-20-agent-2 cri-dockerd[20177]: time="2024-09-14T16:48:10Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 14 16:48:12 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:48:12.268916404Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 14 16:48:12 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:48:12.268916741Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 14 16:48:12 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:48:12.270565132Z" level=error msg="Error running exec 5850c8723a5f3914cc75d492c3c378f76d513035ce0e69b2dc71b946a873fd69 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 14 16:48:12 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:48:12.470185310Z" level=info msg="ignoring event" container=d1979caf895a7a9b62fd3900a19961843d70c6bfc882753eb9e900c94f171d81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:48:17 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:48:17.996845189Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 14 16:48:17 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:48:17.999575691Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 14 16:49:38 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:49:38.996015499Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 14 16:49:38 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:49:38.998240531Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 14 16:50:57 ubuntu-20-agent-2 cri-dockerd[20177]: time="2024-09-14T16:50:57Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 14 16:50:59 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:50:59.422289200Z" level=info msg="ignoring event" container=c31737f1453dc5b2cf5e2280fb9b0ba561ad427a4f767635cb5cfa31ad99c416 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:52:22 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:52:22.991664088Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 14 16:52:22 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:52:22.994026608Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 14 16:54:58 ubuntu-20-agent-2 cri-dockerd[20177]: time="2024-09-14T16:54:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d5f5a06d665aaedfb4e0140f8f39a145b454f961f23a039e9562bc178467bfe9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 14 16:54:58 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:54:58.656195632Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:54:58 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:54:58.660560962Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:55:13 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:13.994938601Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:55:13 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:13.997032991Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:55:39 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:39.984872344Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:55:39 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:39.986993075Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:55:58 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:58.091258764Z" level=info msg="ignoring event" container=d5f5a06d665aaedfb4e0140f8f39a145b454f961f23a039e9562bc178467bfe9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:55:58 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:58.390910442Z" level=info msg="ignoring event" container=70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:55:58 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:58.452150905Z" level=info msg="ignoring event" container=013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:55:58 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:58.539589506Z" level=info msg="ignoring event" container=4465f5d53b5ce0180855c32a59e3bd9643edcb648925d37c27c2663305261fb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:55:58 ubuntu-20-agent-2 dockerd[19849]: time="2024-09-14T16:55:58.622912215Z" level=info msg="ignoring event" container=09ab6598ca828c0ab88cf28f4755c1a18647fd2c32452a3acafb689c22f1239a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c31737f1453dc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   7a7f7955e798c       gadget-87j8c
	ba377c86f8e09       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   0a3226599d679       gcp-auth-89d5ffd79-mnttp
	4fb6da73473b6       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   50eee81894e34       csi-hostpathplugin-ztdqz
	a2310319c8b72       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   50eee81894e34       csi-hostpathplugin-ztdqz
	fb400847e41e1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   50eee81894e34       csi-hostpathplugin-ztdqz
	8301d6a800e94       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   50eee81894e34       csi-hostpathplugin-ztdqz
	0a2464192e706       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   50eee81894e34       csi-hostpathplugin-ztdqz
	51161403e6d36       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   0ed140f145e2e       csi-hostpath-resizer-0
	22cd95f39244a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   50eee81894e34       csi-hostpathplugin-ztdqz
	052a79e868fb3       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   e5f0d7a1ddc8f       csi-hostpath-attacher-0
	b6e75ae291843       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   ad22b67256c68       snapshot-controller-56fcc65765-9qxmm
	20faf6da114af       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   80df624cb2bc0       snapshot-controller-56fcc65765-m9559
	e38cf52ae7b40       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   6ca8d64e43118       local-path-provisioner-86d989889c-d6r8s
	3698c87489c8a       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   00c302b71e17a       metrics-server-84c5f94fbc-w6gkb
	2d9af601eeb96       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   7843ed11cb8db       yakd-dashboard-67d98fc6b-2d5zj
	64b9d6283dce2       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  10 minutes ago      Running             tiller                                   0                   5d8776e863901       tiller-deploy-b48cc5f79-fbdhl
	013ea250725ed       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   09ab6598ca828       registry-proxy-bst86
	74b005214aeda       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   7b8c15f8f9e6d       nvidia-device-plugin-daemonset-x8gpg
	3e582c1a7bd82       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   7cae87f75195c       cloud-spanner-emulator-769b77f747-6fz5w
	70aa5086c6f87       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   4465f5d53b5ce       registry-66c9cd494c-l64nd
	a930b42b64d7b       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   20e40add1ac5e       storage-provisioner
	576f1a84ac213       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   2186ff2fe47c2       kube-proxy-hwvcp
	5626f36b2a3db       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   f1b3ce1715153       coredns-7c65d6cfc9-ck9l6
	b998d64232e6b       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   c88add6370953       kube-apiserver-ubuntu-20-agent-2
	602b13986c8ab       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   a1bf51b7f8a96       kube-controller-manager-ubuntu-20-agent-2
	ed8822197cdd4       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   227b1f58013cd       etcd-ubuntu-20-agent-2
	3c6ac2d46a2ff       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   c652c9d6faec7       kube-scheduler-ubuntu-20-agent-2
	
	
	==> coredns [5626f36b2a3d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38941 - 3016 "HINFO IN 3971535663268492602.2760440062148059176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018379603s
	[INFO] 10.244.0.24:42988 - 46154 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000364835s
	[INFO] 10.244.0.24:50345 - 63075 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00049016s
	[INFO] 10.244.0.24:43600 - 46143 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014248s
	[INFO] 10.244.0.24:60915 - 23176 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132263s
	[INFO] 10.244.0.24:38658 - 20867 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128456s
	[INFO] 10.244.0.24:34276 - 14600 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125863s
	[INFO] 10.244.0.24:34047 - 39514 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004200062s
	[INFO] 10.244.0.24:39053 - 60939 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004321012s
	[INFO] 10.244.0.24:59080 - 9366 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003158848s
	[INFO] 10.244.0.24:42374 - 63685 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003398678s
	[INFO] 10.244.0.24:57810 - 60549 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00205651s
	[INFO] 10.244.0.24:38505 - 30938 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002196839s
	[INFO] 10.244.0.24:51495 - 38539 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001415742s
	[INFO] 10.244.0.24:34195 - 42297 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002445151s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T16_44_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 16:44:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 16:55:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 16:51:43 +0000   Sat, 14 Sep 2024 16:44:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 16:51:43 +0000   Sat, 14 Sep 2024 16:44:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 16:51:43 +0000   Sat, 14 Sep 2024 16:44:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 16:51:43 +0000   Sat, 14 Sep 2024 16:44:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    9b88171d-ecb9-4de4-9c6a-0f636541be1e
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-769b77f747-6fz5w      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-87j8c                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-mnttp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-ck9l6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-ztdqz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hwvcp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-w6gkb              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-x8gpg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-9qxmm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-m9559         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-fbdhl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-d6r8s      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-2d5zj               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 e8 94 0a 85 fe 08 06
	[  +1.041989] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 8b 1d ac 2b a3 08 06
	[  +0.015199] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 3b d6 d0 be d6 08 06
	[  +2.656487] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ce dd a3 ae 14 cd 08 06
	[  +1.622910] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e 80 c1 4d 59 b0 08 06
	[  +2.006918] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 10 e6 7b 1f 00 08 06
	[  +4.566233] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 57 60 1c 5f b8 08 06
	[  +0.034429] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 23 e4 d8 d4 ad 08 06
	[  +0.913253] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a f4 3b 9b 9a 6d 08 06
	[ +33.659154] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 6e 63 89 de f8 08 06
	[  +0.026614] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 05 c6 69 91 64 08 06
	[Sep14 16:46] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af c2 e8 af 2e 08 06
	[  +0.000552] IPv4: martian source 10.244.0.24 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee e2 5c 68 72 a8 08 06
	
	
	==> etcd [ed8822197cdd] <==
	{"level":"info","ts":"2024-09-14T16:44:30.910349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-14T16:44:30.911462Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T16:44:30.911480Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:44:30.911506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T16:44:30.911736Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T16:44:30.911780Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T16:44:30.911526Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T16:44:30.912791Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T16:44:30.912950Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T16:44:30.913922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T16:44:30.914205Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-14T16:44:30.914429Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:44:30.914498Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:44:30.914541Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:44:51.085249Z","caller":"traceutil/trace.go:171","msg":"trace[691678971] transaction","detail":"{read_only:false; response_revision:946; number_of_response:1; }","duration":"125.356324ms","start":"2024-09-14T16:44:50.959877Z","end":"2024-09-14T16:44:51.085233Z","steps":["trace[691678971] 'process raft request'  (duration: 125.230307ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:44:51.272182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.065497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:44:51.272276Z","caller":"traceutil/trace.go:171","msg":"trace[1527461562] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:946; }","duration":"113.214099ms","start":"2024-09-14T16:44:51.159048Z","end":"2024-09-14T16:44:51.272262Z","steps":["trace[1527461562] 'range keys from in-memory index tree'  (duration: 112.994592ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T16:45:02.883962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.797683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-14T16:45:02.884002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.47488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-14T16:45:02.884032Z","caller":"traceutil/trace.go:171","msg":"trace[420793602] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:988; }","duration":"113.878654ms","start":"2024-09-14T16:45:02.770139Z","end":"2024-09-14T16:45:02.884018Z","steps":["trace[420793602] 'range keys from in-memory index tree'  (duration: 113.734853ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:45:02.884053Z","caller":"traceutil/trace.go:171","msg":"trace[777126678] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:988; }","duration":"110.537866ms","start":"2024-09-14T16:45:02.773503Z","end":"2024-09-14T16:45:02.884041Z","steps":["trace[777126678] 'range keys from in-memory index tree'  (duration: 110.426551ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T16:45:08.948746Z","caller":"traceutil/trace.go:171","msg":"trace[1419494061] transaction","detail":"{read_only:false; response_revision:1036; number_of_response:1; }","duration":"128.009848ms","start":"2024-09-14T16:45:08.820715Z","end":"2024-09-14T16:45:08.948725Z","steps":["trace[1419494061] 'process raft request'  (duration: 66.365395ms)","trace[1419494061] 'compare'  (duration: 61.557323ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T16:54:31.171070Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1754}
	{"level":"info","ts":"2024-09-14T16:54:31.193609Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1754,"took":"21.992045ms","hash":3336425864,"current-db-size-bytes":8425472,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4493312,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-09-14T16:54:31.193656Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3336425864,"revision":1754,"compact-revision":-1}
	
	
	==> gcp-auth [ba377c86f8e0] <==
	2024/09/14 16:46:04 GCP Auth Webhook started!
	2024/09/14 16:46:22 Ready to marshal response ...
	2024/09/14 16:46:22 Ready to write response ...
	2024/09/14 16:46:22 Ready to marshal response ...
	2024/09/14 16:46:22 Ready to write response ...
	2024/09/14 16:46:45 Ready to marshal response ...
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:46:45 Ready to marshal response ...
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:46:45 Ready to marshal response ...
	2024/09/14 16:46:45 Ready to write response ...
	2024/09/14 16:54:57 Ready to marshal response ...
	2024/09/14 16:54:57 Ready to write response ...
	
	
	==> kernel <==
	 16:55:59 up 38 min,  0 users,  load average: 1.22, 0.53, 0.36
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [b998d64232e6] <==
	W0914 16:45:22.320037       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.89.162:443: connect: connection refused
	W0914 16:45:23.365633       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.89.162:443: connect: connection refused
	W0914 16:45:29.302590       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.8.53:443: connect: connection refused
	E0914 16:45:29.302629       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.8.53:443: connect: connection refused" logger="UnhandledError"
	W0914 16:45:50.286981       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.8.53:443: connect: connection refused
	E0914 16:45:50.287026       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.8.53:443: connect: connection refused" logger="UnhandledError"
	W0914 16:45:50.297511       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.8.53:443: connect: connection refused
	E0914 16:45:50.297548       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.8.53:443: connect: connection refused" logger="UnhandledError"
	I0914 16:46:22.086020       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0914 16:46:22.103301       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0914 16:46:35.498967       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0914 16:46:35.504634       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0914 16:46:35.629549       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0914 16:46:35.630222       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0914 16:46:35.646439       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0914 16:46:35.806749       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0914 16:46:35.814074       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0914 16:46:35.866583       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0914 16:46:36.642671       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0914 16:46:36.823682       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0914 16:46:36.823714       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0914 16:46:36.823728       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0914 16:46:36.866405       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0914 16:46:36.866601       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0914 16:46:37.026353       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [602b13986c8a] <==
	W0914 16:54:57.097148       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:54:57.097194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:54:58.757558       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:54:58.757614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:02.591305       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:02.591354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:06.472219       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:06.472270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:08.442579       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:08.442626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:22.360488       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:22.360526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:36.558603       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:36.558643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:37.671524       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:37.671564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:38.150524       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:38.150568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:41.281994       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:41.282054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:41.341829       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:41.341869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:55:49.967343       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:55:49.967394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:55:58.339538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.146µs"
	
	
	==> kube-proxy [576f1a84ac21] <==
	I0914 16:44:42.100383       1 server_linux.go:66] "Using iptables proxy"
	I0914 16:44:42.250712       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0914 16:44:42.250850       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 16:44:42.357568       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 16:44:42.357645       1 server_linux.go:169] "Using iptables Proxier"
	I0914 16:44:42.369181       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 16:44:42.369663       1 server.go:483] "Version info" version="v1.31.1"
	I0914 16:44:42.369692       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 16:44:42.371724       1 config.go:199] "Starting service config controller"
	I0914 16:44:42.371736       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 16:44:42.371758       1 config.go:105] "Starting endpoint slice config controller"
	I0914 16:44:42.371763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 16:44:42.372418       1 config.go:328] "Starting node config controller"
	I0914 16:44:42.372432       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 16:44:42.473926       1 shared_informer.go:320] Caches are synced for node config
	I0914 16:44:42.473972       1 shared_informer.go:320] Caches are synced for service config
	I0914 16:44:42.474021       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3c6ac2d46a2f] <==
	E0914 16:44:32.078097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0914 16:44:32.078156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:32.077764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 16:44:32.078185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:32.077808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 16:44:32.078210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:32.077849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 16:44:32.078243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:32.077869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 16:44:32.078273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:32.967479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 16:44:32.967521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:32.993161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 16:44:32.993208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:33.075390       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 16:44:33.075434       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 16:44:33.096942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 16:44:33.097000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:33.177361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 16:44:33.177407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:33.197767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 16:44:33.197816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:33.225204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 16:44:33.225244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0914 16:44:35.176510       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Thu 2024-08-01 22:17:46 UTC, end at Sat 2024-09-14 16:55:59 UTC. --
	Sep 14 16:55:50 ubuntu-20-agent-2 kubelet[21055]: E0914 16:55:50.850795   21055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="d0bb631a-c5e9-4374-8700-a6f870533bbb"
	Sep 14 16:55:57 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:57.849300   21055 scope.go:117] "RemoveContainer" containerID="c31737f1453dc5b2cf5e2280fb9b0ba561ad427a4f767635cb5cfa31ad99c416"
	Sep 14 16:55:57 ubuntu-20-agent-2 kubelet[21055]: E0914 16:55:57.849452   21055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-87j8c_gadget(7db2b86c-68c9-413f-9848-66a94176a61d)\"" pod="gadget/gadget-87j8c" podUID="7db2b86c-68c9-413f-9848-66a94176a61d"
	Sep 14 16:55:57 ubuntu-20-agent-2 kubelet[21055]: E0914 16:55:57.850995   21055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ea4e581d-2f29-43a6-861f-2e7f9afad03e"
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.216462   21055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d0bb631a-c5e9-4374-8700-a6f870533bbb-gcp-creds\") pod \"d0bb631a-c5e9-4374-8700-a6f870533bbb\" (UID: \"d0bb631a-c5e9-4374-8700-a6f870533bbb\") "
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.216543   21055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glzl\" (UniqueName: \"kubernetes.io/projected/d0bb631a-c5e9-4374-8700-a6f870533bbb-kube-api-access-9glzl\") pod \"d0bb631a-c5e9-4374-8700-a6f870533bbb\" (UID: \"d0bb631a-c5e9-4374-8700-a6f870533bbb\") "
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.216563   21055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0bb631a-c5e9-4374-8700-a6f870533bbb-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d0bb631a-c5e9-4374-8700-a6f870533bbb" (UID: "d0bb631a-c5e9-4374-8700-a6f870533bbb"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.216648   21055 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d0bb631a-c5e9-4374-8700-a6f870533bbb-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.218551   21055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bb631a-c5e9-4374-8700-a6f870533bbb-kube-api-access-9glzl" (OuterVolumeSpecName: "kube-api-access-9glzl") pod "d0bb631a-c5e9-4374-8700-a6f870533bbb" (UID: "d0bb631a-c5e9-4374-8700-a6f870533bbb"). InnerVolumeSpecName "kube-api-access-9glzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.317689   21055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9glzl\" (UniqueName: \"kubernetes.io/projected/d0bb631a-c5e9-4374-8700-a6f870533bbb-kube-api-access-9glzl\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.619850   21055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw5q4\" (UniqueName: \"kubernetes.io/projected/a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c-kube-api-access-zw5q4\") pod \"a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c\" (UID: \"a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c\") "
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.622073   21055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c-kube-api-access-zw5q4" (OuterVolumeSpecName: "kube-api-access-zw5q4") pod "a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c" (UID: "a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c"). InnerVolumeSpecName "kube-api-access-zw5q4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.720280   21055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp42s\" (UniqueName: \"kubernetes.io/projected/04bf6491-0898-4738-9ad3-f4f343173ece-kube-api-access-pp42s\") pod \"04bf6491-0898-4738-9ad3-f4f343173ece\" (UID: \"04bf6491-0898-4738-9ad3-f4f343173ece\") "
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.720409   21055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zw5q4\" (UniqueName: \"kubernetes.io/projected/a3cef9f1-3478-4b92-84d9-5e8f21c3ec9c-kube-api-access-zw5q4\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.722091   21055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bf6491-0898-4738-9ad3-f4f343173ece-kube-api-access-pp42s" (OuterVolumeSpecName: "kube-api-access-pp42s") pod "04bf6491-0898-4738-9ad3-f4f343173ece" (UID: "04bf6491-0898-4738-9ad3-f4f343173ece"). InnerVolumeSpecName "kube-api-access-pp42s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.821607   21055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pp42s\" (UniqueName: \"kubernetes.io/projected/04bf6491-0898-4738-9ad3-f4f343173ece-kube-api-access-pp42s\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 14 16:55:58 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:58.861280   21055 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bb631a-c5e9-4374-8700-a6f870533bbb" path="/var/lib/kubelet/pods/d0bb631a-c5e9-4374-8700-a6f870533bbb/volumes"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:59.181842   21055 scope.go:117] "RemoveContainer" containerID="013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:59.198455   21055 scope.go:117] "RemoveContainer" containerID="013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: E0914 16:55:59.199387   21055 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4" containerID="013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:59.199443   21055 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4"} err="failed to get container status \"013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4\": rpc error: code = Unknown desc = Error response from daemon: No such container: 013ea250725eddda2c52ba529a017a1eccf7a91616f516099de372a56ec277c4"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:59.199470   21055 scope.go:117] "RemoveContainer" containerID="70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:59.217667   21055 scope.go:117] "RemoveContainer" containerID="70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: E0914 16:55:59.218520   21055 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed" containerID="70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed"
	Sep 14 16:55:59 ubuntu-20-agent-2 kubelet[21055]: I0914 16:55:59.218563   21055 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed"} err="failed to get container status \"70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed\": rpc error: code = Unknown desc = Error response from daemon: No such container: 70aa5086c6f87cdd7d6b0a34b6da95eac377bdb3a3781ee634611854df7211ed"
	
	
	==> storage-provisioner [a930b42b64d7] <==
	I0914 16:44:42.398145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 16:44:42.408731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 16:44:42.408785       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 16:44:42.425084       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 16:44:42.425281       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_015c838e-8893-42ff-ae4c-630090988d72!
	I0914 16:44:42.426397       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"049bc555-ec55-43a8-a421-f4e27e5b84d0", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_015c838e-8893-42ff-ae4c-630090988d72 became leader
	I0914 16:44:42.525468       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_015c838e-8893-42ff-ae4c-630090988d72!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Sat, 14 Sep 2024 16:46:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kjdc4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kjdc4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m42s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.83s)


Test pass (111/168)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.44
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 1.02
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
22 TestOffline 43.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 102.52
29 TestAddons/serial/Volcano 40.64
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.44
36 TestAddons/parallel/MetricsServer 5.38
37 TestAddons/parallel/HelmTiller 9.49
39 TestAddons/parallel/CSI 50.36
40 TestAddons/parallel/Headlamp 15.86
41 TestAddons/parallel/CloudSpanner 5.26
43 TestAddons/parallel/NvidiaDevicePlugin 6.23
44 TestAddons/parallel/Yakd 10.4
45 TestAddons/StoppedEnableDisable 10.69
47 TestCertExpiration 227.96
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 30.01
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 25.52
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.07
65 TestFunctional/serial/MinikubeKubectlCmd 0.1
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 34.87
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.82
70 TestFunctional/serial/LogsFileCmd 0.84
71 TestFunctional/serial/InvalidService 4.27
73 TestFunctional/parallel/ConfigCmd 0.26
74 TestFunctional/parallel/DashboardCmd 7.68
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.08
77 TestFunctional/parallel/StatusCmd 0.41
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.23
81 TestFunctional/parallel/ProfileCmd/profile_list 0.21
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
84 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
85 TestFunctional/parallel/ServiceCmd/List 0.33
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.15
89 TestFunctional/parallel/ServiceCmd/URL 0.15
90 TestFunctional/parallel/ServiceCmdConnect 6.3
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 22.77
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.19
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
107 TestFunctional/parallel/MySQL 20.43
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.67
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 12.91
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.39
122 TestFunctional/parallel/License 0.2
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.02
125 TestFunctional/delete_minikube_cached_images 0.02
130 TestImageBuild/serial/Setup 13.63
131 TestImageBuild/serial/NormalBuild 1.53
132 TestImageBuild/serial/BuildWithBuildArg 0.79
133 TestImageBuild/serial/BuildWithDockerIgnore 0.57
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.56
138 TestJSONOutput/start/Command 29.07
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.48
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.38
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 5.33
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.19
166 TestMainNoArgs 0.04
167 TestMinikubeProfile 32.77
175 TestPause/serial/Start 23.48
176 TestPause/serial/SecondStartNoReconfiguration 25.97
177 TestPause/serial/Pause 0.49
178 TestPause/serial/VerifyStatus 0.13
179 TestPause/serial/Unpause 0.39
180 TestPause/serial/PauseAgain 0.54
181 TestPause/serial/DeletePaused 1.59
182 TestPause/serial/VerifyDeletedResources 0.06
196 TestRunningBinaryUpgrade 69.4
198 TestStoppedBinaryUpgrade/Setup 0.43
199 TestStoppedBinaryUpgrade/Upgrade 50.29
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
201 TestKubernetesUpgrade 306.75
TestDownloadOnly/v1.20.0/json-events (1.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.442188293s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.44s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (56.64041ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:43:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:43:35.044121   15485 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:43:35.044211   15485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:35.044215   15485 out.go:358] Setting ErrFile to fd 2...
	I0914 16:43:35.044220   15485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:35.044417   15485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8655/.minikube/bin
	W0914 16:43:35.044535   15485 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19643-8655/.minikube/config/config.json: open /home/jenkins/minikube-integration/19643-8655/.minikube/config/config.json: no such file or directory
	I0914 16:43:35.045057   15485 out.go:352] Setting JSON to true
	I0914 16:43:35.045930   15485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1564,"bootTime":1726330651,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:43:35.046016   15485 start.go:139] virtualization: kvm guest
	I0914 16:43:35.048314   15485 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0914 16:43:35.048455   15485 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-8655/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 16:43:35.048480   15485 notify.go:220] Checking for updates...
	I0914 16:43:35.049816   15485 out.go:169] MINIKUBE_LOCATION=19643
	I0914 16:43:35.051204   15485 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:43:35.052614   15485 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	I0914 16:43:35.053931   15485 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	I0914 16:43:35.055375   15485 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (1.02s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.017793508s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.02s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (59.229524ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:43:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:43:36.776797   15638 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:43:36.776936   15638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:36.776946   15638 out.go:358] Setting ErrFile to fd 2...
	I0914 16:43:36.776953   15638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:36.777147   15638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8655/.minikube/bin
	I0914 16:43:36.777717   15638 out.go:352] Setting JSON to true
	I0914 16:43:36.778580   15638 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1566,"bootTime":1726330651,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 16:43:36.778674   15638 start.go:139] virtualization: kvm guest
	I0914 16:43:36.780779   15638 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0914 16:43:36.780902   15638 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-8655/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 16:43:36.780955   15638 notify.go:220] Checking for updates...
	I0914 16:43:36.782390   15638 out.go:169] MINIKUBE_LOCATION=19643
	I0914 16:43:36.783858   15638 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:43:36.785114   15638 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	I0914 16:43:36.786282   15638 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	I0914 16:43:36.787643   15638 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:38793 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.56s)

TestOffline (43.38s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (41.758361301s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.61718793s)
--- PASS: TestOffline (43.38s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (46.809102ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (46.818908ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (102.52s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m42.514809045s)
--- PASS: TestAddons/Setup (102.52s)

TestAddons/serial/Volcano (40.64s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.959449ms
addons_test.go:897: volcano-scheduler stabilized in 8.172756ms
addons_test.go:905: volcano-admission stabilized in 8.23278ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-x6xj8" [635d8e7e-87e9-4a86-bb99-7da506b77fd9] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002985681s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-4r4rj" [c8f86993-6435-4c6b-969b-d06074ff6972] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.00382207s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-8wqpr" [95b900aa-9e34-4380-a18d-d0217c2bc7a0] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003281198s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4c3b1c5d-b21c-4383-9c05-b33ddeb29a34] Pending
helpers_test.go:344: "test-job-nginx-0" [4c3b1c5d-b21c-4383-9c05-b33ddeb29a34] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4c3b1c5d-b21c-4383-9c05-b33ddeb29a34] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004267636s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.309693371s)
--- PASS: TestAddons/serial/Volcano (40.64s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.44s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-87j8c" [7db2b86c-68c9-413f-9848-66a94176a61d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003504843s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.433811769s)
--- PASS: TestAddons/parallel/InspektorGadget (10.44s)

TestAddons/parallel/MetricsServer (5.38s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.202162ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-w6gkb" [a30809cd-34d2-40f2-bc91-25cf59d4d63f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003842685s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.38s)

TestAddons/parallel/HelmTiller (9.49s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.003299ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-fbdhl" [715c342f-6073-4f83-840f-a4843a421dc6] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00367094s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.203005487s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.49s)

TestAddons/parallel/CSI (50.36s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.737889ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4af72567-a0c3-4d5e-87a9-18ade5cceb2d] Pending
helpers_test.go:344: "task-pv-pod" [4af72567-a0c3-4d5e-87a9-18ade5cceb2d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4af72567-a0c3-4d5e-87a9-18ade5cceb2d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00371435s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [12c7a1d9-5099-4272-ab59-9450a474ddcb] Pending
helpers_test.go:344: "task-pv-pod-restore" [12c7a1d9-5099-4272-ab59-9450a474ddcb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [12c7a1d9-5099-4272-ab59-9450a474ddcb] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004018042s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.274956333s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.36s)

TestAddons/parallel/Headlamp (15.86s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-nhtzs" [8fecdd97-1c1a-49ef-915c-409980fb2874] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-nhtzs" [8fecdd97-1c1a-49ef-915c-409980fb2874] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003398529s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.390351814s)
--- PASS: TestAddons/parallel/Headlamp (15.86s)

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-6fz5w" [9829ba72-1afd-45f3-93ad-49e802ec2b61] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00391204s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/NvidiaDevicePlugin (6.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-x8gpg" [4cdaa783-c778-4569-b473-095114459f82] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003815288s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.23s)

TestAddons/parallel/Yakd (10.4s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2d5zj" [1a625b9b-2f06-45c6-af25-e83808ee5ce0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003420704s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.393066473s)
--- PASS: TestAddons/parallel/Yakd (10.40s)

TestAddons/StoppedEnableDisable (10.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.39717861s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.69s)

TestCertExpiration (227.96s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.146159362s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.167121797s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.646139191s)
--- PASS: TestCertExpiration (227.96s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19643-8655/.minikube/files/etc/test/nested/copy/15473/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.007611515s)
--- PASS: TestFunctional/serial/StartWithProxy (30.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.52s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (25.514156791s)
functional_test.go:663: soft start took 25.514988627s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.52s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (34.87s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.869452526s)
functional_test.go:761: restart took 34.86955016s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.87s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.82s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.82s)

TestFunctional/serial/LogsFileCmd (0.84s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1751672710/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.84s)

TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (153.027708ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:30808 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (39.933522ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.473779ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)

TestFunctional/parallel/DashboardCmd (7.68s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/14 17:03:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 51019: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.68s)

TestFunctional/parallel/DryRun (0.15s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (77.448556ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0914 17:03:37.609767   51401 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:03:37.609882   51401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:37.609893   51401 out.go:358] Setting ErrFile to fd 2...
	I0914 17:03:37.609899   51401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:37.610073   51401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8655/.minikube/bin
	I0914 17:03:37.610559   51401 out.go:352] Setting JSON to false
	I0914 17:03:37.611557   51401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2767,"bootTime":1726330651,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:03:37.611643   51401 start.go:139] virtualization: kvm guest
	I0914 17:03:37.613872   51401 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0914 17:03:37.615386   51401 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-8655/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 17:03:37.615415   51401 notify.go:220] Checking for updates...
	I0914 17:03:37.615425   51401 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:03:37.616807   51401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:03:37.618025   51401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	I0914 17:03:37.619377   51401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	I0914 17:03:37.620752   51401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:03:37.622040   51401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:03:37.623637   51401 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:03:37.623978   51401 exec_runner.go:51] Run: systemctl --version
	I0914 17:03:37.626950   51401 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:03:37.639013   51401 out.go:177] * Using the none driver based on existing profile
	I0914 17:03:37.640231   51401 start.go:297] selected driver: none
	I0914 17:03:37.640251   51401 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:03:37.640402   51401 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:03:37.640427   51401 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0914 17:03:37.640875   51401 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0914 17:03:37.643079   51401 out.go:201] 
	W0914 17:03:37.644388   51401 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 17:03:37.645685   51401 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.964953ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0914 17:03:37.764543   51430 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:03:37.764646   51430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:37.764655   51430 out.go:358] Setting ErrFile to fd 2...
	I0914 17:03:37.764659   51430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:37.764955   51430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-8655/.minikube/bin
	I0914 17:03:37.765644   51430 out.go:352] Setting JSON to false
	I0914 17:03:37.766651   51430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2767,"bootTime":1726330651,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 17:03:37.766744   51430 start.go:139] virtualization: kvm guest
	I0914 17:03:37.768733   51430 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0914 17:03:37.770927   51430 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-8655/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 17:03:37.770963   51430 notify.go:220] Checking for updates...
	I0914 17:03:37.770976   51430 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:03:37.772512   51430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:03:37.773866   51430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	I0914 17:03:37.775297   51430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	I0914 17:03:37.776542   51430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 17:03:37.777794   51430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:03:37.779602   51430 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:03:37.780070   51430 exec_runner.go:51] Run: systemctl --version
	I0914 17:03:37.782623   51430 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:03:37.793653   51430 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0914 17:03:37.795021   51430 start.go:297] selected driver: none
	I0914 17:03:37.795038   51430 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:03:37.795166   51430 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:03:37.795192   51430 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0914 17:03:37.795639   51430 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0914 17:03:37.797803   51430 out.go:201] 
	W0914 17:03:37.798943   51430 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 17:03:37.800193   51430 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.41s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "166.685181ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.29601ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "170.223688ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.00504ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-575k2" [64832216-5395-492b-9c50-465e2788daf7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-575k2" [64832216-5395-492b-9c50-465e2788daf7] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003312774s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "327.121842ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:31382
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:31382
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

TestFunctional/parallel/ServiceCmdConnect (6.3s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hh9ph" [8e9a18bb-132f-4cbe-a983-ea8374ac3bc5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hh9ph" [8e9a18bb-132f-4cbe-a983-ea8374ac3bc5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003446067s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:30130
functional_test.go:1675: http://10.138.0.48:30130: success! body:

Hostname: hello-node-connect-67bdd5bbb4-hh9ph

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:30130
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.30s)

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (22.77s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [88bcf127-e792-4ac2-b952-38f8633b7db5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003368789s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a58bc81c-8cdb-4da4-a1f9-df24f2aab512] Pending
helpers_test.go:344: "sp-pod" [a58bc81c-8cdb-4da4-a1f9-df24f2aab512] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a58bc81c-8cdb-4da4-a1f9-df24f2aab512] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004206559s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.096787873s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fe3fbb69-e5b4-4170-abb8-c52bd7eb969e] Pending
helpers_test.go:344: "sp-pod" [fe3fbb69-e5b4-4170-abb8-c52bd7eb969e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fe3fbb69-e5b4-4170-abb8-c52bd7eb969e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003925615s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.77s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 53161: operation not permitted
helpers_test.go:508: unable to kill pid 53112: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [aaaf034c-e150-4e79-b7c0-7bddb5073191] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [aaaf034c-e150-4e79-b7c0-7bddb5073191] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003284248s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.76.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MySQL (20.43s)
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-g7drz" [992da698-c412-4039-9004-354861114963] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-g7drz" [992da698-c412-4039-9004-354861114963] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.00289645s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-g7drz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-g7drz -- mysql -ppassword -e "show databases;": exit status 1 (112.030548ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-g7drz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-g7drz -- mysql -ppassword -e "show databases;": exit status 1 (109.612799ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-g7drz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.67s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.668758974s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.67s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (12.91s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.912128484s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (12.91s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.39s)
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

TestFunctional/parallel/License (0.2s)
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (13.63s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.630257247s)
--- PASS: TestImageBuild/serial/Setup (13.63s)

TestImageBuild/serial/NormalBuild (1.53s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.529621617s)
--- PASS: TestImageBuild/serial/NormalBuild (1.53s)

TestImageBuild/serial/BuildWithBuildArg (0.79s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.79s)

TestImageBuild/serial/BuildWithDockerIgnore (0.57s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.57s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.56s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.56s)

TestJSONOutput/start/Command (29.07s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (29.070532199s)
--- PASS: TestJSONOutput/start/Command (29.07s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.38s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.38s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.33s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.327361827s)
--- PASS: TestJSONOutput/stop/Command (5.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.730947ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bbe3e2b2-0150-48bc-bbca-c655fe07bb94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dce7a49-5aa6-446a-9517-6cc86135881b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"1deee7de-404e-46c6-b838-f05174419534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"639bd25c-9111-43ca-9237-77385fbc5e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig"}}
	{"specversion":"1.0","id":"87093bba-79b5-4228-a246-f25da5297cd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube"}}
	{"specversion":"1.0","id":"5683099e-5600-4fe2-8dc6-2989643e45b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"945533ec-a65b-4270-af4e-a84980b31879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9e74e775-70a5-497f-927f-1494f9ba9495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
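The lines captured above are CloudEvents-style JSON events, one per line, as emitted by `minikube start --output=json`. A minimal sketch of consuming one of these lines (the error event is copied verbatim from the stdout capture above; field names follow that capture):

```python
import json

# One CloudEvents-style line as emitted by `minikube start --output=json`,
# copied from the captured stdout above.
line = '''{"specversion":"1.0","id":"9e74e775-70a5-497f-927f-1494f9ba9495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}'''

event = json.loads(line)

# The "type" field distinguishes steps (…minikube.step), info messages
# (…minikube.info), and errors (…minikube.error).
if event["type"] == "io.k8s.sigs.minikube.error":
    data = event["data"]
    # Note: exitcode is a JSON string, not a number, in this output.
    print(data["name"], data["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

The test asserts exit status 56, which matches the `exitcode` carried inside the error event's `data` payload.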

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (32.77s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
E0914 17:06:04.834407   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:06:04.841761   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:06:04.853185   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:06:04.874624   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:06:04.916054   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:06:04.997671   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:06:05.159165   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:06:05.480834   15473 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-8655/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.414935969s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.470621909s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.280547898s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (32.77s)

                                                
                                    
TestPause/serial/Start (23.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (23.477135041s)
--- PASS: TestPause/serial/Start (23.48s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (25.97s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (25.965954688s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.97s)

                                                
                                    
TestPause/serial/Pause (0.49s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

                                                
                                    
TestPause/serial/VerifyStatus (0.13s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (126.007959ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
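The `--layout=cluster` JSON above reports state through HTTP-like status codes: 200 OK, 405 Stopped, 418 Paused. A sketch of decoding that payload (abridged from the capture above; field names as shown there):

```python
import json

# Cluster-layout status, abridged from the
# `minikube status --output=json --layout=cluster` capture above.
status = json.loads('''{"Name":"minikube","StatusCode":418,"StatusName":"Paused",
"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK",
"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}''')

# Top-level StatusCode 418 marks the whole profile as paused, which is why
# the CLI itself exits with status 2 here.
paused = status["StatusCode"] == 418
apiserver = status["Nodes"][0]["Components"]["apiserver"]
print(paused, apiserver["StatusName"])  # True Paused
```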

                                                
                                    
TestPause/serial/Unpause (0.39s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.39s)

                                                
                                    
TestPause/serial/PauseAgain (0.54s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

                                                
                                    
TestPause/serial/DeletePaused (1.59s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.594316425s)
--- PASS: TestPause/serial/DeletePaused (1.59s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                    
TestRunningBinaryUpgrade (69.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3278544753 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3278544753 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (30.21170674s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.273930151s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.180672844s)
--- PASS: TestRunningBinaryUpgrade (69.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (50.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.354205195 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.354205195 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.859904936s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.354205195 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.354205195 -p minikube stop: (23.629500067s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.800277638s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
TestKubernetesUpgrade (306.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.813693704s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.319280693s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (70.223115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.22426295s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (63.972209ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-8655/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-8655/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.996257319s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.201531831s)
--- PASS: TestKubernetesUpgrade (306.75s)
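The K8S_DOWNGRADE_UNSUPPORTED refusal above (exit status 106) is triggered by comparing the requested `--kubernetes-version` against the existing cluster's version. A minimal sketch of such a check — an illustration, not minikube's actual implementation:

```python
# Minimal sketch of a downgrade guard like the one behind
# K8S_DOWNGRADE_UNSUPPORTED above; not minikube's actual code.
def parse(v: str) -> tuple:
    """Turn 'v1.31.1' into the comparable tuple (1, 31, 1)."""
    return tuple(int(p) for p in v.lstrip("v").split("."))

def is_downgrade(existing: str, requested: str) -> bool:
    # Tuple comparison orders versions component by component.
    return parse(requested) < parse(existing)

print(is_downgrade("v1.31.1", "v1.20.0"))  # True: refused, as in the log
print(is_downgrade("v1.31.1", "v1.31.1"))  # False: same-version restart allowed
```

This is why the log's final restart with `--kubernetes-version=v1.31.1` succeeds immediately after the v1.20.0 attempt is rejected.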

                                                
                                    

Test skip (56/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.13
190 TestStartStop/group/newest-cni 0.13
191 TestStartStop/group/default-k8s-diff-port 0.13
192 TestStartStop/group/no-preload 0.13
193 TestStartStop/group/disable-driver-mounts 0.13
194 TestStartStop/group/embed-certs 0.13
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)

                                                
                                                
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestForceSystemdFlag (0s)

                                                
                                                
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
TestForceSystemdEnv (0s)

                                                
                                                
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestErrorSpam (0s)

                                                
                                                
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)