Test Report: none_Linux 19616

Commit: ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Test fail (1/168)

Order | Failed test                  | Duration
33    | TestAddons/parallel/Registry | 71.83s

TestAddons/parallel/Registry (71.83s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.59092ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2mldr" [d87e815f-a8f5-4d9c-921b-fdc6c76b6645] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003315117s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g4qfg" [7bb49b8b-92a4-44db-b384-d1ec8357b811] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003015783s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.083341278s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
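The failed assertion is an in-cluster HTTP probe of the registry Service, run from a throwaway busybox pod. A minimal sketch for reproducing the check by hand (assuming the `minikube` kubectl context from this run is available; the guard and fallback echo are additions for safe standalone use, not part of the test):

```shell
# Probe the registry Service the same way addons_test.go does:
# a one-shot busybox pod issuing wget --spider against the cluster DNS name.
probe_registry() {
  kubectl --context minikube run --rm registry-test \
    --restart=Never \
    --image=gcr.io/k8s-minikube/busybox \
    -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
}

if command -v kubectl >/dev/null 2>&1; then
  # Keep going even if the probe times out, so the failure mode is visible.
  probe_registry || echo "probe failed (registry Service unreachable or DNS timeout?)"
else
  echo "kubectl not found; skipping probe"
fi
```

A timeout here with the pod itself Running (as in the log above) usually points at in-cluster DNS or the registry-proxy path rather than the registry pod.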
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/12 21:41:25 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:34985               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:29 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:29 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 12 Sep 24 21:29 UTC | 12 Sep 24 21:31 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 12 Sep 24 21:32 UTC | 12 Sep 24 21:32 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 12 Sep 24 21:41 UTC | 12 Sep 24 21:41 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:29:53
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:29:53.593890   16645 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:29:53.593997   16645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:53.594008   16645 out.go:358] Setting ErrFile to fd 2...
	I0912 21:29:53.594013   16645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:29:53.594237   16645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5765/.minikube/bin
	I0912 21:29:53.594862   16645 out.go:352] Setting JSON to false
	I0912 21:29:53.595759   16645 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":745,"bootTime":1726175849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:29:53.595811   16645 start.go:139] virtualization: kvm guest
	I0912 21:29:53.598043   16645 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0912 21:29:53.599418   16645 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5765/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:29:53.599448   16645 notify.go:220] Checking for updates...
	I0912 21:29:53.599485   16645 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:29:53.600925   16645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:29:53.602263   16645 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	I0912 21:29:53.603534   16645 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	I0912 21:29:53.604939   16645 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:29:53.606257   16645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:29:53.607739   16645 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:29:53.617074   16645 out.go:177] * Using the none driver based on user configuration
	I0912 21:29:53.618132   16645 start.go:297] selected driver: none
	I0912 21:29:53.618151   16645 start.go:901] validating driver "none" against <nil>
	I0912 21:29:53.618166   16645 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:29:53.618215   16645 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0912 21:29:53.618553   16645 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0912 21:29:53.619112   16645 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:29:53.619313   16645 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:29:53.619386   16645 cni.go:84] Creating CNI manager for ""
	I0912 21:29:53.619406   16645 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:29:53.619424   16645 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:29:53.619483   16645 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:53.620884   16645 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0912 21:29:53.622211   16645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/config.json ...
	I0912 21:29:53.622244   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/config.json: {Name:mkc20fa35724ada7b7b857a9b3548e2d8755b91c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:53.622411   16645 start.go:360] acquireMachinesLock for minikube: {Name:mk1c9e166a8b28b8c6723115fe4cb548118fc61e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 21:29:53.622458   16645 start.go:364] duration metric: took 29.844µs to acquireMachinesLock for "minikube"
	I0912 21:29:53.622476   16645 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:29:53.622567   16645 start.go:125] createHost starting for "" (driver="none")
	I0912 21:29:53.623843   16645 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0912 21:29:53.624913   16645 exec_runner.go:51] Run: systemctl --version
	I0912 21:29:53.627307   16645 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0912 21:29:53.627335   16645 client.go:168] LocalClient.Create starting
	I0912 21:29:53.627401   16645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5765/.minikube/certs/ca.pem
	I0912 21:29:53.627428   16645 main.go:141] libmachine: Decoding PEM data...
	I0912 21:29:53.627444   16645 main.go:141] libmachine: Parsing certificate...
	I0912 21:29:53.627501   16645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19616-5765/.minikube/certs/cert.pem
	I0912 21:29:53.627522   16645 main.go:141] libmachine: Decoding PEM data...
	I0912 21:29:53.627535   16645 main.go:141] libmachine: Parsing certificate...
	I0912 21:29:53.627849   16645 client.go:171] duration metric: took 507.415µs to LocalClient.Create
	I0912 21:29:53.627870   16645 start.go:167] duration metric: took 565.669µs to libmachine.API.Create "minikube"
	I0912 21:29:53.627876   16645 start.go:293] postStartSetup for "minikube" (driver="none")
	I0912 21:29:53.627912   16645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:29:53.627945   16645 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:29:53.636798   16645 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 21:29:53.636819   16645 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 21:29:53.636827   16645 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 21:29:53.638541   16645 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0912 21:29:53.639726   16645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5765/.minikube/addons for local assets ...
	I0912 21:29:53.639813   16645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-5765/.minikube/files for local assets ...
	I0912 21:29:53.639844   16645 start.go:296] duration metric: took 11.957319ms for postStartSetup
	I0912 21:29:53.640396   16645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/config.json ...
	I0912 21:29:53.640532   16645 start.go:128] duration metric: took 17.95534ms to createHost
	I0912 21:29:53.640544   16645 start.go:83] releasing machines lock for "minikube", held for 18.077526ms
	I0912 21:29:53.640870   16645 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 21:29:53.640993   16645 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0912 21:29:53.642896   16645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 21:29:53.642938   16645 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:29:53.651589   16645 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 21:29:53.651618   16645 start.go:495] detecting cgroup driver to use...
	I0912 21:29:53.651656   16645 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 21:29:53.651767   16645 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:29:53.679527   16645 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 21:29:53.689277   16645 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 21:29:53.699133   16645 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 21:29:53.699186   16645 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 21:29:53.707696   16645 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:29:53.717071   16645 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 21:29:53.726653   16645 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:29:53.735434   16645 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:29:53.744311   16645 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 21:29:53.752591   16645 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 21:29:53.760935   16645 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 21:29:53.770158   16645 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:29:53.777366   16645 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:29:53.784124   16645 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0912 21:29:53.985862   16645 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0912 21:29:54.055908   16645 start.go:495] detecting cgroup driver to use...
	I0912 21:29:54.055963   16645 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 21:29:54.056095   16645 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:29:54.074894   16645 exec_runner.go:51] Run: which cri-dockerd
	I0912 21:29:54.075808   16645 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 21:29:54.083341   16645 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0912 21:29:54.083363   16645 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0912 21:29:54.083395   16645 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0912 21:29:54.090545   16645 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0912 21:29:54.090680   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube935594180 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0912 21:29:54.098280   16645 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0912 21:29:54.299245   16645 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0912 21:29:54.515416   16645 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 21:29:54.515580   16645 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0912 21:29:54.515596   16645 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0912 21:29:54.515641   16645 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0912 21:29:54.523550   16645 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0912 21:29:54.523699   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube302174304 /etc/docker/daemon.json
	I0912 21:29:54.531268   16645 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0912 21:29:54.750025   16645 exec_runner.go:51] Run: sudo systemctl restart docker
	I0912 21:29:55.040837   16645 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 21:29:55.051090   16645 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0912 21:29:55.065836   16645 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:29:55.076131   16645 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0912 21:29:55.297654   16645 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0912 21:29:55.506760   16645 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0912 21:29:55.702464   16645 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0912 21:29:55.716604   16645 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:29:55.726639   16645 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0912 21:29:55.933673   16645 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0912 21:29:56.002718   16645 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 21:29:56.002807   16645 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0912 21:29:56.004167   16645 start.go:563] Will wait 60s for crictl version
	I0912 21:29:56.004209   16645 exec_runner.go:51] Run: which crictl
	I0912 21:29:56.005187   16645 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0912 21:29:56.033280   16645 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0912 21:29:56.033340   16645 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0912 21:29:56.053370   16645 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0912 21:29:56.074600   16645 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0912 21:29:56.074663   16645 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0912 21:29:56.077378   16645 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0912 21:29:56.078484   16645 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:29:56.078583   16645 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:29:56.078597   16645 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0912 21:29:56.078676   16645 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0912 21:29:56.078714   16645 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0912 21:29:56.124240   16645 cni.go:84] Creating CNI manager for ""
	I0912 21:29:56.124263   16645 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:29:56.124272   16645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:29:56.124294   16645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:29:56.124454   16645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
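Editor's note: the generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal, illustrative sketch (not minikube's own code) of pulling the `kind` out of each document, using only the standard library:

```python
# Extract the "kind" of each document in a multi-document kubeadm config.
# Stdlib-only sketch: split on document separators instead of using PyYAML.
def config_kinds(text: str) -> list[str]:
    kinds = []
    for doc in text.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

sample = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""
print(config_kinds(sample))
```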
	I0912 21:29:56.124523   16645 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:29:56.133964   16645 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0912 21:29:56.134006   16645 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0912 21:29:56.141475   16645 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0912 21:29:56.141472   16645 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0912 21:29:56.141475   16645 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
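Editor's note: each `dl.k8s.io` download above carries a `?checksum=file:...sha256` companion URL, so the fetched binary is verified against a published SHA-256 digest before use. A hedged sketch of that verification step (an assumption about the mechanism, not minikube's actual implementation):

```python
import hashlib

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Return True when the file's SHA-256 digest equals the expected hex string."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large binaries (~75 MB kubelet) are not read at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.strip().lower()
```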
	I0912 21:29:56.141536   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0912 21:29:56.141548   16645 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:29:56.141567   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0912 21:29:56.153270   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0912 21:29:56.190873   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1021226317 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0912 21:29:56.199929   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3110504930 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0912 21:29:56.225405   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube148219347 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0912 21:29:56.288849   16645 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:29:56.297480   16645 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0912 21:29:56.297501   16645 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0912 21:29:56.297536   16645 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0912 21:29:56.304602   16645 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0912 21:29:56.304725   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube41157484 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0912 21:29:56.312434   16645 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0912 21:29:56.312457   16645 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0912 21:29:56.312491   16645 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0912 21:29:56.320053   16645 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:29:56.320173   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4138672360 /lib/systemd/system/kubelet.service
	I0912 21:29:56.327817   16645 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0912 21:29:56.327916   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3945835595 /var/tmp/minikube/kubeadm.yaml.new
	I0912 21:29:56.336105   16645 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
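Editor's note: the `grep` above checks that `/etc/hosts` already maps `control-plane.minikube.internal` to the node IP, appending the entry only when it is missing. A small illustrative sketch of that idempotent check (hypothetical helper, not minikube's code):

```python
def ensure_hosts_entry(lines: list[str], ip: str, host: str) -> list[str]:
    """Return hosts-file lines with an 'IP<tab>host' entry present exactly once."""
    for line in lines:
        parts = line.split()
        # Match the IP in column one and the hostname among its aliases.
        if len(parts) >= 2 and parts[0] == ip and host in parts[1:]:
            return lines  # already present, nothing to do
    return lines + [f"{ip}\t{host}"]
```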
	I0912 21:29:56.337289   16645 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0912 21:29:56.551247   16645 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0912 21:29:56.566010   16645 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube for IP: 10.138.0.48
	I0912 21:29:56.566039   16645 certs.go:194] generating shared ca certs ...
	I0912 21:29:56.566062   16645 certs.go:226] acquiring lock for ca certs: {Name:mk41095faacb8d6acdb44c4a2359d4a47db087eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:56.566250   16645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-5765/.minikube/ca.key
	I0912 21:29:56.566307   16645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-5765/.minikube/proxy-client-ca.key
	I0912 21:29:56.566321   16645 certs.go:256] generating profile certs ...
	I0912 21:29:56.566405   16645 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/client.key
	I0912 21:29:56.566425   16645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/client.crt with IP's: []
	I0912 21:29:57.118163   16645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/client.crt ...
	I0912 21:29:57.118196   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/client.crt: {Name:mkcff4ad4a10dd953ffabac3058ac98ff51a75b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:57.118331   16645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/client.key ...
	I0912 21:29:57.118341   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/client.key: {Name:mk91e7ae1bf30cc33254d6660e2348e12a366dcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:57.118441   16645 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0912 21:29:57.118460   16645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
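Editor's note: `10.96.0.1` appears in the apiserver cert's IP SANs because the `kubernetes` Service ClusterIP is conventionally the first host address of the service CIDR (`10.96.0.0/12` in this config). A quick stdlib check of that derivation:

```python
import ipaddress

def first_service_ip(cidr: str) -> str:
    """First usable host address of a service CIDR (the 'kubernetes' ClusterIP)."""
    return str(next(ipaddress.ip_network(cidr).hosts()))

print(first_service_ip("10.96.0.0/12"))  # → 10.96.0.1
```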
	I0912 21:29:57.710690   16645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0912 21:29:57.710725   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkdacb9b1ea23c81d1676062b25ad3807393d881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:57.710859   16645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0912 21:29:57.710869   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk080b99f2866ec5c7a69a251440294c596500f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:57.710920   16645 certs.go:381] copying /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.crt
	I0912 21:29:57.710989   16645 certs.go:385] copying /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.key
	I0912 21:29:57.711047   16645 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.key
	I0912 21:29:57.711061   16645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0912 21:29:57.833104   16645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.crt ...
	I0912 21:29:57.833131   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.crt: {Name:mkfd78647103101354d4c41510dcee7c3c4b18a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:57.833256   16645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.key ...
	I0912 21:29:57.833267   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.key: {Name:mkda05f273c88ac15ee1691adce968d0571fd124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:29:57.833449   16645 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5765/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 21:29:57.833484   16645 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5765/.minikube/certs/ca.pem (1078 bytes)
	I0912 21:29:57.833507   16645 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5765/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:29:57.833538   16645 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-5765/.minikube/certs/key.pem (1679 bytes)
	I0912 21:29:57.834062   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:29:57.834172   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4217274557 /var/lib/minikube/certs/ca.crt
	I0912 21:29:57.842528   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0912 21:29:57.842641   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube85101805 /var/lib/minikube/certs/ca.key
	I0912 21:29:57.851031   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:29:57.851142   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube54334548 /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:29:57.859455   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 21:29:57.859557   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1831887736 /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:29:57.866916   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0912 21:29:57.867052   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube689507648 /var/lib/minikube/certs/apiserver.crt
	I0912 21:29:57.874139   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:29:57.874241   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1550629939 /var/lib/minikube/certs/apiserver.key
	I0912 21:29:57.881266   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:29:57.881370   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3690201855 /var/lib/minikube/certs/proxy-client.crt
	I0912 21:29:57.888400   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:29:57.888518   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1403829567 /var/lib/minikube/certs/proxy-client.key
	I0912 21:29:57.896433   16645 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0912 21:29:57.896448   16645 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:57.896476   16645 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:57.903219   16645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-5765/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:29:57.903322   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube53692668 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:57.910580   16645 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:29:57.910688   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1513013385 /var/lib/minikube/kubeconfig
	I0912 21:29:57.919171   16645 exec_runner.go:51] Run: openssl version
	I0912 21:29:57.921896   16645 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:29:57.930137   16645 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:57.931421   16645 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 12 21:29 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:57.931462   16645 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:29:57.934175   16645 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:29:57.942283   16645 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:29:57.943346   16645 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:29:57.943387   16645 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:29:57.943490   16645 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 21:29:57.958996   16645 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:29:57.967625   16645 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:29:57.975690   16645 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0912 21:29:57.995215   16645 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:29:58.004757   16645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:29:58.004784   16645 kubeadm.go:157] found existing configuration files:
	
	I0912 21:29:58.004826   16645 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:29:58.013597   16645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:29:58.013647   16645 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:29:58.021026   16645 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:29:58.029397   16645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:29:58.029443   16645 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:29:58.037710   16645 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:29:58.046828   16645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:29:58.046878   16645 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:29:58.054519   16645 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:29:58.062924   16645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:29:58.062970   16645 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:29:58.074017   16645 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 21:29:58.105110   16645 kubeadm.go:310] W0912 21:29:58.104999   17548 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:29:58.105590   16645 kubeadm.go:310] W0912 21:29:58.105548   17548 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:29:58.107148   16645 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:29:58.107218   16645 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:29:58.201206   16645 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:29:58.201284   16645 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:29:58.201292   16645 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:29:58.201297   16645 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:29:58.212080   16645 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:29:58.214917   16645 out.go:235]   - Generating certificates and keys ...
	I0912 21:29:58.214962   16645 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:29:58.214977   16645 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:29:58.633103   16645 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:29:58.721486   16645 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:29:58.882083   16645 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:29:59.095640   16645 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:29:59.192972   16645 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:29:59.193050   16645 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0912 21:29:59.304690   16645 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:29:59.304782   16645 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0912 21:29:59.374889   16645 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:29:59.437328   16645 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:29:59.534312   16645 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:29:59.534531   16645 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:29:59.718226   16645 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:29:59.943353   16645 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:30:00.028253   16645 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:30:00.144787   16645 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:30:00.512857   16645 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:30:00.513346   16645 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:30:00.515539   16645 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:30:00.517567   16645 out.go:235]   - Booting up control plane ...
	I0912 21:30:00.517591   16645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:30:00.517607   16645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:30:00.517613   16645 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:30:00.539505   16645 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:30:00.543881   16645 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:30:00.543911   16645 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:30:00.775080   16645 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:30:00.775102   16645 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:30:01.276274   16645 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.204575ms
	I0912 21:30:01.276296   16645 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:30:05.278250   16645 kubeadm.go:310] [api-check] The API server is healthy after 4.001951526s
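Editor's note: both health checks above ("kubelet healthy after 501ms", "API server healthy after 4s") poll an endpoint repeatedly until it succeeds or a 4m0s budget expires. A generic, hedged sketch of such a poll loop, with `check` standing in for the HTTP `healthz` probe:

```python
import time

def wait_until(check, timeout: float, interval: float = 0.5) -> bool:
    """Call check() every `interval` seconds until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```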
	I0912 21:30:05.290547   16645 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:30:05.300725   16645 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:30:05.317541   16645 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:30:05.317563   16645 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:30:05.325234   16645 kubeadm.go:310] [bootstrap-token] Using token: wgfqov.p8zqhawkz3rnqk34
	I0912 21:30:05.326577   16645 out.go:235]   - Configuring RBAC rules ...
	I0912 21:30:05.326607   16645 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:30:05.329640   16645 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:30:05.334669   16645 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:30:05.337449   16645 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:30:05.339681   16645 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:30:05.341837   16645 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:30:05.684763   16645 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:30:06.115494   16645 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:30:06.684515   16645 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:30:06.685767   16645 kubeadm.go:310] 
	I0912 21:30:06.685790   16645 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:30:06.685796   16645 kubeadm.go:310] 
	I0912 21:30:06.685801   16645 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:30:06.685806   16645 kubeadm.go:310] 
	I0912 21:30:06.685810   16645 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:30:06.685814   16645 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:30:06.685819   16645 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:30:06.685823   16645 kubeadm.go:310] 
	I0912 21:30:06.685828   16645 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:30:06.685832   16645 kubeadm.go:310] 
	I0912 21:30:06.685838   16645 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:30:06.685842   16645 kubeadm.go:310] 
	I0912 21:30:06.685847   16645 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:30:06.685857   16645 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:30:06.685862   16645 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:30:06.685866   16645 kubeadm.go:310] 
	I0912 21:30:06.685872   16645 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:30:06.685881   16645 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:30:06.685886   16645 kubeadm.go:310] 
	I0912 21:30:06.685891   16645 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wgfqov.p8zqhawkz3rnqk34 \
	I0912 21:30:06.685897   16645 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d78fd8f605c1f7c3a5ff0c7d8116916173b65eb1d6e1cb8fa139c454c1849aa \
	I0912 21:30:06.685902   16645 kubeadm.go:310] 	--control-plane 
	I0912 21:30:06.685906   16645 kubeadm.go:310] 
	I0912 21:30:06.685911   16645 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:30:06.685916   16645 kubeadm.go:310] 
	I0912 21:30:06.685922   16645 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wgfqov.p8zqhawkz3rnqk34 \
	I0912 21:30:06.685927   16645 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d78fd8f605c1f7c3a5ff0c7d8116916173b65eb1d6e1cb8fa139c454c1849aa 
	I0912 21:30:06.688772   16645 cni.go:84] Creating CNI manager for ""
	I0912 21:30:06.688796   16645 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:30:06.690547   16645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:30:06.691695   16645 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:30:06.701756   16645 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 21:30:06.701898   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube526977983 /etc/cni/net.d/1-k8s.conflist
	I0912 21:30:06.712406   16645 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:30:06.712450   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:06.712485   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_12T21_30_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0912 21:30:06.721623   16645 ops.go:34] apiserver oom_adj: -16
	I0912 21:30:06.778131   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:07.278867   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:07.779002   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:08.278963   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:08.779191   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:09.279027   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:09.778480   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:10.278723   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:10.778682   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:11.279885   16645 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:30:11.344084   16645 kubeadm.go:1113] duration metric: took 4.631686079s to wait for elevateKubeSystemPrivileges
	I0912 21:30:11.344125   16645 kubeadm.go:394] duration metric: took 13.400740765s to StartCluster
	I0912 21:30:11.344148   16645 settings.go:142] acquiring lock: {Name:mk2fd9998f328cafcc37599cc8db8e2bcc2cffac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:11.344219   16645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-5765/kubeconfig
	I0912 21:30:11.344860   16645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-5765/kubeconfig: {Name:mk80a8972afb3ddcb66c934dfb2224db2ae8e054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:30:11.345078   16645 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:30:11.345159   16645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 21:30:11.345276   16645 addons.go:69] Setting yakd=true in profile "minikube"
	I0912 21:30:11.345290   16645 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0912 21:30:11.345305   16645 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0912 21:30:11.345318   16645 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:30:11.345324   16645 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0912 21:30:11.345336   16645 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0912 21:30:11.345338   16645 mustload.go:65] Loading cluster: minikube
	I0912 21:30:11.345353   16645 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0912 21:30:11.345355   16645 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0912 21:30:11.345361   16645 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0912 21:30:11.345369   16645 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0912 21:30:11.345377   16645 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0912 21:30:11.345384   16645 addons.go:69] Setting volcano=true in profile "minikube"
	I0912 21:30:11.345311   16645 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0912 21:30:11.345398   16645 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0912 21:30:11.345405   16645 addons.go:69] Setting registry=true in profile "minikube"
	I0912 21:30:11.345407   16645 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0912 21:30:11.345354   16645 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0912 21:30:11.345417   16645 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0912 21:30:11.345423   16645 addons.go:234] Setting addon registry=true in "minikube"
	I0912 21:30:11.345427   16645 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0912 21:30:11.345444   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.345450   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.345387   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.345397   16645 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0912 21:30:11.345508   16645 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0912 21:30:11.345530   16645 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:30:11.345409   16645 addons.go:234] Setting addon volcano=true in "minikube"
	I0912 21:30:11.345566   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.345445   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.345990   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346011   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.346036   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.345538   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.346056   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346063   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346087   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.346115   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.346137   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346153   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.345387   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.346196   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.346046   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.346655   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346670   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346675   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.346684   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.346159   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346714   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.345319   16645 addons.go:234] Setting addon yakd=true in "minikube"
	I0912 21:30:11.346720   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.346725   16645 out.go:177] * Configuring local host environment ...
	I0912 21:30:11.346739   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.346747   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.346903   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.346919   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.345390   16645 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0912 21:30:11.346956   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.345399   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.346981   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.346059   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.347402   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.347414   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.347428   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.347438   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.347568   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.347582   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.347619   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.347633   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.347657   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.347692   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 21:30:11.350323   16645 out.go:270] * 
	W0912 21:30:11.350378   16645 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0912 21:30:11.350389   16645 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0912 21:30:11.350405   16645 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0912 21:30:11.350421   16645 out.go:270] * 
	W0912 21:30:11.350470   16645 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0912 21:30:11.350485   16645 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0912 21:30:11.350492   16645 out.go:270] * 
	W0912 21:30:11.350517   16645 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0912 21:30:11.350531   16645 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0912 21:30:11.350539   16645 out.go:270] * 
	W0912 21:30:11.350548   16645 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0912 21:30:11.350578   16645 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:30:11.351789   16645 out.go:177] * Verifying Kubernetes components...
	I0912 21:30:11.353183   16645 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0912 21:30:11.346070   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.354122   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.345328   16645 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0912 21:30:11.355812   16645 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0912 21:30:11.355899   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.346713   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.356749   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.356772   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.356804   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.345409   16645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0912 21:30:11.358579   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.358602   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.369709   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.372252   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.375141   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.377607   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.378060   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.384346   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.389455   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.392062   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.396291   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.397383   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.404552   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.404605   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.404834   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.404896   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.405178   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.408057   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.408108   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.408126   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.408199   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.408279   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.408715   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.420724   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.421021   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.422029   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.426014   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.438475   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.438497   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.438537   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.438546   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.438711   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.438714   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.438727   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.438751   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.440741   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.440907   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.442504   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.442552   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.442978   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.443023   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.444965   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.445036   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.445207   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.447220   16645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:30:11.448657   16645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:11.448678   16645 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0912 21:30:11.448685   16645 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:11.448721   16645 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:11.449476   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.451154   16645 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:30:11.451339   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.451388   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.452390   16645 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:11.452418   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:30:11.452542   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1621031062 /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:11.455399   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.455423   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.456026   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.456052   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.456364   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.456381   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.457310   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.457334   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.458010   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.458031   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.459280   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.459296   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.459657   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.459715   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.460854   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.462262   16645 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 21:30:11.463484   16645 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 21:30:11.463619   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.463657   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.463729   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.464710   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.464728   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.465657   16645 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0912 21:30:11.465691   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.465955   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.466290   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.467385   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.467422   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.466784   16645 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 21:30:11.466840   16645 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:30:11.467272   16645 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:30:11.469754   16645 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:30:11.469788   16645 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:30:11.470025   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube458506233 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:30:11.470208   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:30:11.471226   16645 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:30:11.471257   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 21:30:11.471898   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3002721055 /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:30:11.472098   16645 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:30:11.472120   16645 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:30:11.472766   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4032922494 /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:30:11.473155   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:30:11.474994   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:30:11.475451   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.475545   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.476537   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.476557   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.477193   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:30:11.478101   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.478128   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.478663   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:30:11.478729   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.478768   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.478848   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1251841574 /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:11.481628   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:30:11.483989   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.484910   16645 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0912 21:30:11.484950   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:11.485474   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.485601   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.485791   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:30:11.485933   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:11.485953   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:11.485989   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:11.487458   16645 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:30:11.487489   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:30:11.487541   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.488468   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.488709   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:30:11.488938   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:30:11.489982   16645 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:30:11.490020   16645 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:30:11.490142   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube23447094 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:30:11.491394   16645 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:30:11.491483   16645 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:30:11.491508   16645 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:30:11.493464   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3763740415 /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:30:11.493178   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.493653   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.493278   16645 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:30:11.493865   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:30:11.494015   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.494193   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1296115892 /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:30:11.493292   16645 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:30:11.495708   16645 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:30:11.495788   16645 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:30:11.495812   16645 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:30:11.495942   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube241606548 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:30:11.496970   16645 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:30:11.497177   16645 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:30:11.497307   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1823624863 /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:30:11.497332   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.498560   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.498793   16645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:30:11.498810   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:30:11.498901   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1523386028 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:30:11.499379   16645 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:30:11.500939   16645 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:11.500962   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:30:11.501066   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube45859307 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:11.502727   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.502750   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.503975   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:30:11.509014   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.510670   16645 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:30:11.512278   16645 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:30:11.512301   16645 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:30:11.512433   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1342361984 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:30:11.513627   16645 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:30:11.513679   16645 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:30:11.513806   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3926851646 /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:30:11.515255   16645 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:30:11.515282   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:30:11.515406   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:30:11.515423   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3590071757 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:30:11.516892   16645 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:30:11.522299   16645 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:30:11.522324   16645 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:30:11.524146   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:11.524569   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2275159394 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:30:11.528620   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.528658   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.529857   16645 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:30:11.532057   16645 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:30:11.532198   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3320342377 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:30:11.532866   16645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:30:11.532894   16645 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:30:11.532997   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1435707821 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:30:11.533559   16645 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:30:11.533578   16645 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:30:11.533651   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3365908441 /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:30:11.542759   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:30:11.547716   16645 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:30:11.547748   16645 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:30:11.547891   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1214566400 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:30:11.550595   16645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:11.550622   16645 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:30:11.551202   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube224811668 /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:11.554511   16645 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:30:11.554539   16645 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:30:11.555646   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.555673   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.556434   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1349984661 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:30:11.565342   16645 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:30:11.565375   16645 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:30:11.565647   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3013493239 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:30:11.566544   16645 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:30:11.566571   16645 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:30:11.566807   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1735417714 /etc/kubernetes/addons/ig-role.yaml
	I0912 21:30:11.567126   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.568838   16645 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:30:11.570544   16645 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:11.570570   16645 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:30:11.570587   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:30:11.570712   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1223333350 /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:11.572238   16645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:11.572266   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:30:11.572407   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube411384985 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:11.572990   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:11.573049   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:11.573348   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:30:11.583151   16645 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:30:11.583186   16645 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:30:11.583342   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1132910172 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:30:11.590189   16645 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:30:11.590220   16645 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:30:11.590335   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2654809551 /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:30:11.591095   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:11.591115   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:11.592208   16645 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:11.592230   16645 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:30:11.592343   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1134483887 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:11.592557   16645 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:30:11.592577   16645 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:30:11.592692   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1427763223 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:30:11.595640   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:11.595682   16645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:11.595696   16645 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0912 21:30:11.595704   16645 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:11.595741   16645 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:11.601324   16645 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:30:11.601357   16645 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:30:11.601477   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1965937902 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:30:11.603602   16645 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:30:11.603628   16645 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:30:11.603742   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3832238955 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:30:11.604709   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:30:11.609608   16645 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:11.609645   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:30:11.609772   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2208361304 /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:11.623412   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:30:11.623882   16645 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:30:11.623910   16645 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:30:11.624030   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2882274592 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:30:11.630132   16645 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:30:11.630307   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3193269377 /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:11.630611   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:30:11.632115   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:30:11.632279   16645 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:30:11.632299   16645 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:30:11.632749   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1451255266 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:30:11.638967   16645 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:30:11.638994   16645 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:30:11.639100   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube496043936 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:30:11.667935   16645 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:11.667971   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:30:11.668108   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1960033088 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:11.683055   16645 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:30:11.683104   16645 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:30:11.683255   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3976955609 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:30:11.698624   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:30:11.724382   16645 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:30:11.724421   16645 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:30:11.724575   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4126729477 /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:30:11.778672   16645 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0912 21:30:11.832990   16645 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:30:11.833025   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:30:11.833163   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3222507776 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:30:11.833307   16645 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:11.833332   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:30:11.833468   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3208843233 /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:11.856838   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:11.961381   16645 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0912 21:30:11.964751   16645 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0912 21:30:11.964772   16645 node_ready.go:38] duration metric: took 3.361075ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0912 21:30:11.964782   16645 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:11.973247   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:30:11.977946   16645 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-522m5" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:12.002422   16645 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:30:12.002462   16645 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:30:12.002604   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4196953796 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:30:12.099910   16645 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:30:12.099944   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:30:12.100074   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2778460680 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:30:12.175010   16645 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0912 21:30:12.247684   16645 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:30:12.247732   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:30:12.247903   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2376857663 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:30:12.277546   16645 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:12.277586   16645 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:30:12.277734   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1613987298 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:12.468857   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:30:12.657073   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.024895717s)
	I0912 21:30:12.690177   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.186169136s)
	I0912 21:30:12.697558   16645 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0912 21:30:12.749537   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.126072469s)
	I0912 21:30:12.749571   16645 addons.go:475] Verifying addon registry=true in "minikube"
	I0912 21:30:12.769000   16645 out.go:177] * Verifying registry addon...
	I0912 21:30:12.770243   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.19685348s)
	I0912 21:30:12.770277   16645 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0912 21:30:12.771849   16645 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:30:12.788178   16645 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:30:12.788202   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:12.922565   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.291743491s)
	I0912 21:30:12.930536   16645 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0912 21:30:13.029527   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.424773993s)
	I0912 21:30:13.235645   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.262334807s)
	I0912 21:30:13.278158   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:13.715094   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.858191622s)
	W0912 21:30:13.715135   16645 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:30:13.715162   16645 retry.go:31] will retry after 315.300002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:30:13.775632   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:13.985284   16645 pod_ready.go:103] pod "coredns-7c65d6cfc9-522m5" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:14.031338   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:30:14.281558   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:14.492129   16645 pod_ready.go:93] pod "coredns-7c65d6cfc9-522m5" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:14.492161   16645 pod_ready.go:82] duration metric: took 2.514121436s for pod "coredns-7c65d6cfc9-522m5" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:14.492176   16645 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dr57d" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:14.626568   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.111120417s)
	I0912 21:30:14.776515   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:14.958123   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.489197463s)
	I0912 21:30:14.958161   16645 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0912 21:30:14.960097   16645 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:30:14.967720   16645 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:30:15.003176   16645 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:30:15.003202   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:15.325963   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:15.473376   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:15.775837   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:15.972254   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:16.275550   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:16.472238   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:16.497371   16645 pod_ready.go:103] pod "coredns-7c65d6cfc9-dr57d" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:16.776208   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:16.962580   16645 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.931182764s)
	I0912 21:30:16.973465   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:17.275575   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:17.472418   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:17.499656   16645 pod_ready.go:93] pod "coredns-7c65d6cfc9-dr57d" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:17.499678   16645 pod_ready.go:82] duration metric: took 3.007494248s for pod "coredns-7c65d6cfc9-dr57d" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.499688   16645 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:17.776675   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:17.971964   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:18.276022   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:18.473362   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:18.474615   16645 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:30:18.474740   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube686678415 /var/lib/minikube/google_application_credentials.json
	I0912 21:30:18.486775   16645 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:30:18.486923   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3631773729 /var/lib/minikube/google_cloud_project
	I0912 21:30:18.497047   16645 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0912 21:30:18.497111   16645 host.go:66] Checking if "minikube" exists ...
	I0912 21:30:18.497604   16645 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0912 21:30:18.497623   16645 api_server.go:166] Checking apiserver status ...
	I0912 21:30:18.497647   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:18.516548   16645 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17951/cgroup
	I0912 21:30:18.528031   16645 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da"
	I0912 21:30:18.528090   16645 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/cfa15392efa8ff37eb6596bef0562b55f339f7a8e5dac5bbdec024eb917613da/freezer.state
	I0912 21:30:18.536846   16645 api_server.go:204] freezer state: "THAWED"
	I0912 21:30:18.536871   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:18.541277   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:18.541335   16645 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:30:18.564378   16645 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:30:18.567701   16645 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:30:18.591217   16645 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:30:18.591290   16645 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:30:18.591436   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1329485944 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:30:18.605885   16645 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:30:18.605926   16645 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:30:18.606057   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube658230265 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:30:18.616143   16645 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:18.616172   16645 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:30:18.616298   16645 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1841318164 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:18.627392   16645 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:30:18.775854   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:19.010413   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:19.281934   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:19.349104   16645 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0912 21:30:19.350537   16645 out.go:177] * Verifying gcp-auth addon...
	I0912 21:30:19.354544   16645 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:30:19.383461   16645 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:30:19.473892   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:19.505147   16645 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0912 21:30:19.775773   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:19.972359   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:20.275327   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:20.471921   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:20.505025   16645 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:20.505063   16645 pod_ready.go:82] duration metric: took 3.005366958s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.505077   16645 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.509608   16645 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:20.509628   16645 pod_ready.go:82] duration metric: took 4.543752ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.509637   16645 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.514027   16645 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:20.514045   16645 pod_ready.go:82] duration metric: took 4.402022ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.514056   16645 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wm2hw" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.518308   16645 pod_ready.go:93] pod "kube-proxy-wm2hw" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:20.518330   16645 pod_ready.go:82] duration metric: took 4.266794ms for pod "kube-proxy-wm2hw" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.518341   16645 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:20.775495   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:20.972197   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:21.078269   16645 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0912 21:30:21.078291   16645 pod_ready.go:82] duration metric: took 559.922901ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0912 21:30:21.078298   16645 pod_ready.go:39] duration metric: took 9.113503801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:30:21.078321   16645 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:30:21.078397   16645 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:30:21.093587   16645 api_server.go:72] duration metric: took 9.742980579s to wait for apiserver process to appear ...
	I0912 21:30:21.093609   16645 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:30:21.093625   16645 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0912 21:30:21.096968   16645 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0912 21:30:21.097742   16645 api_server.go:141] control plane version: v1.31.1
	I0912 21:30:21.097761   16645 api_server.go:131] duration metric: took 4.146919ms to wait for apiserver health ...
	I0912 21:30:21.097768   16645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:30:21.106831   16645 system_pods.go:59] 17 kube-system pods found
	I0912 21:30:21.106861   16645 system_pods.go:61] "coredns-7c65d6cfc9-dr57d" [5847b725-91ec-4a43-90e7-e20e6f4526da] Running
	I0912 21:30:21.106870   16645 system_pods.go:61] "csi-hostpath-attacher-0" [a4f5506a-b927-4cb4-a5eb-f0b01899c91b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:21.106876   16645 system_pods.go:61] "csi-hostpath-resizer-0" [a99dd8f3-4ba2-4a13-a71f-07242dd1165d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:21.106885   16645 system_pods.go:61] "csi-hostpathplugin-qgjqz" [c1880015-74bb-4402-8312-379e2b620ec2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:21.106889   16645 system_pods.go:61] "etcd-ubuntu-20-agent-2" [27772db8-e8ed-477a-813e-ba9d17774144] Running
	I0912 21:30:21.106893   16645 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [a6270f33-1045-4fdd-87d5-f7ab72ccd084] Running
	I0912 21:30:21.106899   16645 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [447142c8-dc46-4a26-9903-73239ad9e2ee] Running
	I0912 21:30:21.106902   16645 system_pods.go:61] "kube-proxy-wm2hw" [e1ad6fa9-4948-4894-891a-fad6a33df89a] Running
	I0912 21:30:21.106906   16645 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [1bd48cdd-3d13-446e-80f8-92fe7f306131] Running
	I0912 21:30:21.106911   16645 system_pods.go:61] "metrics-server-84c5f94fbc-5lgxq" [1f086edc-d734-40a7-8336-4b99ca2ef54e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:21.106917   16645 system_pods.go:61] "nvidia-device-plugin-daemonset-x2pzq" [bc25e507-54d5-4d43-81b2-c203b640ba13] Running
	I0912 21:30:21.106923   16645 system_pods.go:61] "registry-66c9cd494c-2mldr" [d87e815f-a8f5-4d9c-921b-fdc6c76b6645] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:21.106929   16645 system_pods.go:61] "registry-proxy-g4qfg" [7bb49b8b-92a4-44db-b384-d1ec8357b811] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:21.106936   16645 system_pods.go:61] "snapshot-controller-56fcc65765-bh492" [948ba51a-6c8d-4082-8472-daaba3f80ca6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:21.106952   16645 system_pods.go:61] "snapshot-controller-56fcc65765-mf2nj" [f40518da-d1f4-40fb-97e6-462368245783] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:21.106959   16645 system_pods.go:61] "storage-provisioner" [589a72d3-1ae2-4203-a9c1-21d67b63ca7b] Running
	I0912 21:30:21.106965   16645 system_pods.go:61] "tiller-deploy-b48cc5f79-q5z94" [9be4f6aa-1461-4e08-87ef-af8cdb39e9d3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:21.106973   16645 system_pods.go:74] duration metric: took 9.199269ms to wait for pod list to return data ...
	I0912 21:30:21.106980   16645 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:30:21.276517   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:21.302317   16645 default_sa.go:45] found service account: "default"
	I0912 21:30:21.302420   16645 default_sa.go:55] duration metric: took 195.420047ms for default service account to be created ...
	I0912 21:30:21.302447   16645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:30:21.471723   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:21.507384   16645 system_pods.go:86] 17 kube-system pods found
	I0912 21:30:21.507407   16645 system_pods.go:89] "coredns-7c65d6cfc9-dr57d" [5847b725-91ec-4a43-90e7-e20e6f4526da] Running
	I0912 21:30:21.507418   16645 system_pods.go:89] "csi-hostpath-attacher-0" [a4f5506a-b927-4cb4-a5eb-f0b01899c91b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:30:21.507427   16645 system_pods.go:89] "csi-hostpath-resizer-0" [a99dd8f3-4ba2-4a13-a71f-07242dd1165d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:30:21.507437   16645 system_pods.go:89] "csi-hostpathplugin-qgjqz" [c1880015-74bb-4402-8312-379e2b620ec2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:30:21.507443   16645 system_pods.go:89] "etcd-ubuntu-20-agent-2" [27772db8-e8ed-477a-813e-ba9d17774144] Running
	I0912 21:30:21.507454   16645 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [a6270f33-1045-4fdd-87d5-f7ab72ccd084] Running
	I0912 21:30:21.507465   16645 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [447142c8-dc46-4a26-9903-73239ad9e2ee] Running
	I0912 21:30:21.507470   16645 system_pods.go:89] "kube-proxy-wm2hw" [e1ad6fa9-4948-4894-891a-fad6a33df89a] Running
	I0912 21:30:21.507476   16645 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [1bd48cdd-3d13-446e-80f8-92fe7f306131] Running
	I0912 21:30:21.507481   16645 system_pods.go:89] "metrics-server-84c5f94fbc-5lgxq" [1f086edc-d734-40a7-8336-4b99ca2ef54e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:30:21.507488   16645 system_pods.go:89] "nvidia-device-plugin-daemonset-x2pzq" [bc25e507-54d5-4d43-81b2-c203b640ba13] Running
	I0912 21:30:21.507494   16645 system_pods.go:89] "registry-66c9cd494c-2mldr" [d87e815f-a8f5-4d9c-921b-fdc6c76b6645] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:30:21.507502   16645 system_pods.go:89] "registry-proxy-g4qfg" [7bb49b8b-92a4-44db-b384-d1ec8357b811] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:30:21.507508   16645 system_pods.go:89] "snapshot-controller-56fcc65765-bh492" [948ba51a-6c8d-4082-8472-daaba3f80ca6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:21.507517   16645 system_pods.go:89] "snapshot-controller-56fcc65765-mf2nj" [f40518da-d1f4-40fb-97e6-462368245783] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:30:21.507523   16645 system_pods.go:89] "storage-provisioner" [589a72d3-1ae2-4203-a9c1-21d67b63ca7b] Running
	I0912 21:30:21.507535   16645 system_pods.go:89] "tiller-deploy-b48cc5f79-q5z94" [9be4f6aa-1461-4e08-87ef-af8cdb39e9d3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:30:21.507547   16645 system_pods.go:126] duration metric: took 205.082335ms to wait for k8s-apps to be running ...
	I0912 21:30:21.507560   16645 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:30:21.507609   16645 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:30:21.519053   16645 system_svc.go:56] duration metric: took 11.468955ms WaitForService to wait for kubelet
	I0912 21:30:21.519078   16645 kubeadm.go:582] duration metric: took 10.168475451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:30:21.519094   16645 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:30:21.703268   16645 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 21:30:21.703291   16645 node_conditions.go:123] node cpu capacity is 8
	I0912 21:30:21.703302   16645 node_conditions.go:105] duration metric: took 184.203349ms to run NodePressure ...
	I0912 21:30:21.703314   16645 start.go:241] waiting for startup goroutines ...
	I0912 21:30:21.775002   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:21.972575   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:22.275370   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:22.472192   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:22.775973   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:22.972079   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:23.275889   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:23.473177   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:23.775595   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:23.972444   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:24.275358   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:24.472109   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:24.775509   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:24.973119   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:25.275720   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:25.471919   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:25.774667   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:25.971723   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:26.275078   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:26.476855   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:26.775727   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:26.971483   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:27.276239   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:27.472647   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:27.776083   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:27.972605   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:28.276147   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:28.471315   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:28.776359   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:28.971809   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:29.274820   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:29.473358   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:29.775266   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:29.972909   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:30.276311   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:30.472174   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:30.775703   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:30.971963   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:31.275259   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:31.472323   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:31.775834   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:31.995286   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:32.275557   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:32.472214   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:32.775772   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:32.972913   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:33.275838   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:33.477445   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:33.775166   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:33.973997   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:34.275540   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:34.477615   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:34.776430   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:34.972219   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:35.275749   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:35.472207   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:35.775861   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:35.972732   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:36.275793   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:36.472643   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:36.776120   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:36.985868   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:37.276465   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:30:37.472831   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:37.776176   16645 kapi.go:107] duration metric: took 25.004328046s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:30:37.978495   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:38.472157   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:38.973185   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:39.472622   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:39.972895   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:40.471994   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:40.971659   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:41.472753   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:41.971902   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:42.485095   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:42.971803   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:43.472581   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.047676   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:44.472539   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.013280   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.472408   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:45.972011   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.472472   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:46.973170   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.472092   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:47.971824   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:48.472654   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:48.971734   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.471893   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:49.972162   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.471654   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:50.972032   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.471150   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:51.972527   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:30:52.472008   16645 kapi.go:107] duration metric: took 37.504286871s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:31:00.857911   16645 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:31:00.857932   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.358003   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:01.857887   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.357998   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:02.857806   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.357939   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:03.857967   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.358106   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:04.857710   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.357907   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:05.857962   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:06.357682   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:06.857768   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.357993   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:07.858070   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.357683   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:08.857542   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.358388   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:09.858113   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.358611   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:10.857962   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.357718   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:11.857682   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.358418   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:12.858444   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.358123   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:13.858102   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.358332   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:14.858473   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.358570   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:15.858552   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.358283   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:16.857973   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.357621   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:17.858443   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.358442   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:18.857254   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.367980   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:19.857820   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.357608   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:20.857351   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.358664   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:21.857534   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:22.358404   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:22.858820   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.358419   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:23.858417   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.358505   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:24.858150   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:25.358430   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:25.857988   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.358062   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:26.858462   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.357864   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:27.857816   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.358008   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:28.857949   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.358129   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:29.857908   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.358150   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:30.857950   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.357814   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:31.857806   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.357781   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:32.857901   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.358162   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:33.858031   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.357900   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:34.859898   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.357396   16645 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:31:35.858689   16645 kapi.go:107] duration metric: took 1m16.504141562s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:31:35.869486   16645 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0912 21:31:35.870961   16645 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:31:35.872269   16645 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:31:35.873610   16645 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, default-storageclass, helm-tiller, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0912 21:31:35.874824   16645 addons.go:510] duration metric: took 1m24.529677922s for enable addons: enabled=[cloud-spanner nvidia-device-plugin default-storageclass helm-tiller storage-provisioner metrics-server yakd storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0912 21:31:35.874869   16645 start.go:246] waiting for cluster config update ...
	I0912 21:31:35.874895   16645 start.go:255] writing updated cluster config ...
	I0912 21:31:35.875162   16645 exec_runner.go:51] Run: rm -f paused
	I0912 21:31:35.920293   16645 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:31:35.922051   16645 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
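The gcp-auth notes in the log above mention opting a pod out of credential mounting via a `gcp-auth-skip-secret` label. As a minimal sketch (pod name, container name, and image are placeholders, not taken from this test run), the label sits in the pod's `metadata.labels`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                 # placeholder pod name
  labels:
    gcp-auth-skip-secret: "true"    # tells the gcp-auth webhook not to mount GCP credentials
spec:
  containers:
  - name: app                        # placeholder container name
    image: busybox                   # placeholder image
    command: ["sleep", "3600"]
```

Pods created without this label after the addon is enabled get the credentials mounted automatically, which matches the "mounted into every pod" message above.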
	
	==> Docker <==
	-- Logs begin at Wed 2024-07-31 19:05:20 UTC, end at Thu 2024-09-12 21:41:26 UTC. --
	Sep 12 21:33:54 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:33:54.615283979Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:33:54 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:33:54.615332522Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:33:54 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:33:54.617358957Z" level=error msg="Error running exec 80d1e03f638740fe262249aa345614401984d1a0560c98e0a3c6d82ac0d8b71f in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 12 21:33:54 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:33:54.635252637Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:33:54 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:33:54.635302326Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 12 21:33:54 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:33:54.637440573Z" level=error msg="Error running exec 47fb21a73f105bc34276cfcbc9199db8cf946bf0fed18c8db689939fa03aba79 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Sep 12 21:33:54 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:33:54.837625811Z" level=info msg="ignoring event" container=5fc6d6a138264a9b29251f02d73fd587396317d2846eb2384fec8046a091752e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:35:12 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:35:12.263600509Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 12 21:35:12 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:35:12.265888996Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 12 21:36:36 ubuntu-20-agent-2 cri-dockerd[17206]: time="2024-09-12T21:36:36Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 12 21:36:37 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:36:37.713354349Z" level=info msg="ignoring event" container=2979c96985c7257b13ad240fe40c2588c62581959f297a6e2be6b017205fe66c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:37:55 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:37:55.255799606Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 12 21:37:55 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:37:55.257897715Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 12 21:40:26 ubuntu-20-agent-2 cri-dockerd[17206]: time="2024-09-12T21:40:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b46ba2d772ed16fb8fea0208cfbfa6f4340629ac52199cf74702382cd9d6fd9f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 12 21:40:26 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:40:26.331382023Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:40:26 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:40:26.333445108Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:40:41 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:40:41.250882391Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:40:41 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:40:41.252932136Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:41:05 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:41:05.255430635Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:41:05 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:41:05.257850361Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:41:25 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:41:25.776097259Z" level=info msg="ignoring event" container=b46ba2d772ed16fb8fea0208cfbfa6f4340629ac52199cf74702382cd9d6fd9f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:26 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:41:26.039925443Z" level=info msg="ignoring event" container=cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:26 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:41:26.102485637Z" level=info msg="ignoring event" container=8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:26 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:41:26.199250506Z" level=info msg="ignoring event" container=d8239e9e710243283407880c3e513cd7a095f65ba773bf2e32b07ac2b1a98bde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:41:26 ubuntu-20-agent-2 dockerd[16878]: time="2024-09-12T21:41:26.269068916Z" level=info msg="ignoring event" container=a202cb0e3aa3a25865e58caef9f8a87ee0e8ef641aa62c28b6c7ea2e72f6b328 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	2979c96985c72       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   4193a34f82bd9       gadget-vxmdx
	a7125946f5af3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   8a652f568a2f7       gcp-auth-89d5ffd79-ng69l
	5ca3445f85b90       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   3064113b46e17       csi-hostpathplugin-qgjqz
	2781511b0fdca       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   3064113b46e17       csi-hostpathplugin-qgjqz
	268b6e44a60a5       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   3064113b46e17       csi-hostpathplugin-qgjqz
	7d240fcdfd10f       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   3064113b46e17       csi-hostpathplugin-qgjqz
	f436cb931c61e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   3064113b46e17       csi-hostpathplugin-qgjqz
	edc6729f560de       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   50268d658b8f1       csi-hostpath-attacher-0
	aa80cd3894df7       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   b73fe52eb2240       csi-hostpath-resizer-0
	a3c5b0603a971       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   3064113b46e17       csi-hostpathplugin-qgjqz
	47584ff65b457       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   53cbcc46d4161       snapshot-controller-56fcc65765-mf2nj
	4473870c7ec30       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   abbca25100dfb       snapshot-controller-56fcc65765-bh492
	8691f6fa365ce       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              10 minutes ago      Exited              registry-proxy                           0                   a202cb0e3aa3a       registry-proxy-g4qfg
	f54ff139c32f4       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   9bbb08a5bdcb4       local-path-provisioner-86d989889c-sslhz
	c944d0baacfb4       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   59ea94b4119ac       yakd-dashboard-67d98fc6b-dbbx5
	cc13bc76cc8f4       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   d8239e9e71024       registry-66c9cd494c-2mldr
	dd6bd7e17777a       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   103ccebac71d6       metrics-server-84c5f94fbc-5lgxq
	ea2938b8b49e5       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  11 minutes ago      Running             tiller                                   0                   ac73de9f5e2df       tiller-deploy-b48cc5f79-q5z94
	cc1e8278bb4a1       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   a6242049b8583       cloud-spanner-emulator-769b77f747-78dsw
	c4bffa0d5e95a       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   bc1bd4fdfffdd       nvidia-device-plugin-daemonset-x2pzq
	fd7e1ca4c2964       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   ad0278684f73b       storage-provisioner
	a8763ac806c9f       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   9fdced718e029       coredns-7c65d6cfc9-dr57d
	5a71c17bf9b9d       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   4079acfe14abf       kube-proxy-wm2hw
	6e8f72341e421       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   8d454e9a369f7       kube-scheduler-ubuntu-20-agent-2
	4b83d11eb0022       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   30d193c2f247e       etcd-ubuntu-20-agent-2
	6318b9122cba5       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   2fde19ce68ddc       kube-controller-manager-ubuntu-20-agent-2
	cfa15392efa8f       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   6fae5ac5046c6       kube-apiserver-ubuntu-20-agent-2
	
	
	==> coredns [a8763ac806c9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59040 - 42022 "HINFO IN 7753755257241594568.4622555850197705807. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034462745s
	[INFO] 10.244.0.25:51510 - 34717 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000320973s
	[INFO] 10.244.0.25:48467 - 12909 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00041676s
	[INFO] 10.244.0.25:59334 - 41442 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000171417s
	[INFO] 10.244.0.25:46817 - 47305 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00019589s
	[INFO] 10.244.0.25:54282 - 51191 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104919s
	[INFO] 10.244.0.25:40312 - 11256 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174454s
	[INFO] 10.244.0.25:51782 - 30754 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.002259102s
	[INFO] 10.244.0.25:45379 - 19987 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003237007s
	[INFO] 10.244.0.25:49243 - 33323 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003015861s
	[INFO] 10.244.0.25:50237 - 22813 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003126789s
	[INFO] 10.244.0.25:54467 - 19360 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.001719377s
	[INFO] 10.244.0.25:39454 - 26547 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002649802s
	[INFO] 10.244.0.25:55467 - 26539 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001230079s
	[INFO] 10.244.0.25:34673 - 2828 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001464304s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_30_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:41:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:37:15 +0000   Thu, 12 Sep 2024 21:30:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:37:15 +0000   Thu, 12 Sep 2024 21:30:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:37:15 +0000   Thu, 12 Sep 2024 21:30:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:37:15 +0000   Thu, 12 Sep 2024 21:30:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    3c1dd8b5-ceb4-438a-bda9-0b001aa171e4
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-78dsw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-vxmdx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-ng69l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-dr57d                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-qgjqz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wm2hw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-5lgxq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-x2pzq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-bh492         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-mf2nj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-q5z94                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-sslhz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-dbbx5               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 2a bd 46 a0 02 08 06
	[  +0.020738] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba b8 59 42 3a ba 08 06
	[  +2.555779] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 4f 27 2d db f2 08 06
	[  +1.640439] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e 8b 71 84 c9 f1 08 06
	[  +2.053173] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a b2 ca a7 e3 a0 08 06
	[  +4.783053] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 ea 5b b1 bc e9 08 06
	[  +0.027268] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 c7 e8 fc 87 ca 08 06
	[  +0.247889] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 72 19 1d b6 d7 08 06
	[  +0.658415] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 46 1c 00 d2 02 08 06
	[Sep12 21:31] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 83 4e ef 29 92 08 06
	[  +0.028235] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a db 5b 94 7a f5 08 06
	[ +11.072765] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 77 2e 1f 27 11 08 06
	[  +0.000472] IPv4: martian source 10.244.0.25 from 10.244.0.5, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5e 03 c0 6d 8b 0e 08 06
	
	
	==> etcd [4b83d11eb002] <==
	{"level":"info","ts":"2024-09-12T21:30:02.921972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-12T21:30:02.921990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-12T21:30:02.922006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-12T21:30:02.922016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-12T21:30:02.922913Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:30:02.923515Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T21:30:02.923523Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:30:02.923570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:30:02.923854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:30:02.923872Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T21:30:02.923901Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T21:30:02.923932Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:30:02.923958Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:30:02.925276Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:30:02.925454Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:30:02.926773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-12T21:30:02.926872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T21:30:19.279844Z","caller":"traceutil/trace.go:171","msg":"trace[887400480] linearizableReadLoop","detail":"{readStateIndex:907; appliedIndex:905; }","duration":"141.658665ms","start":"2024-09-12T21:30:19.138172Z","end":"2024-09-12T21:30:19.279830Z","steps":["trace[887400480] 'read index received'  (duration: 56.884407ms)","trace[887400480] 'applied index is now lower than readState.Index'  (duration: 84.773556ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:30:19.279893Z","caller":"traceutil/trace.go:171","msg":"trace[389700867] transaction","detail":"{read_only:false; response_revision:885; number_of_response:1; }","duration":"142.828009ms","start":"2024-09-12T21:30:19.137044Z","end":"2024-09-12T21:30:19.279872Z","steps":["trace[389700867] 'process raft request'  (duration: 58.016876ms)","trace[389700867] 'compare'  (duration: 84.574201ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-12T21:30:19.279912Z","caller":"traceutil/trace.go:171","msg":"trace[1758983503] transaction","detail":"{read_only:false; response_revision:886; number_of_response:1; }","duration":"142.645575ms","start":"2024-09-12T21:30:19.137248Z","end":"2024-09-12T21:30:19.279893Z","steps":["trace[1758983503] 'process raft request'  (duration: 142.500769ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-12T21:30:19.280067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.872974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
	{"level":"info","ts":"2024-09-12T21:30:19.280152Z","caller":"traceutil/trace.go:171","msg":"trace[1481478305] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:886; }","duration":"141.972923ms","start":"2024-09-12T21:30:19.138168Z","end":"2024-09-12T21:30:19.280141Z","steps":["trace[1481478305] 'agreement among raft nodes before linearized reading'  (duration: 141.743627ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-12T21:40:02.945337Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1771}
	{"level":"info","ts":"2024-09-12T21:40:02.967913Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1771,"took":"22.081806ms","hash":2261299915,"current-db-size-bytes":8400896,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4534272,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-09-12T21:40:02.967954Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2261299915,"revision":1771,"compact-revision":-1}
	
	
	==> gcp-auth [a7125946f5af] <==
	2024/09/12 21:31:35 GCP Auth Webhook started!
	2024/09/12 21:31:51 Ready to marshal response ...
	2024/09/12 21:31:51 Ready to write response ...
	2024/09/12 21:31:51 Ready to marshal response ...
	2024/09/12 21:31:51 Ready to write response ...
	2024/09/12 21:32:13 Ready to marshal response ...
	2024/09/12 21:32:13 Ready to write response ...
	2024/09/12 21:32:13 Ready to marshal response ...
	2024/09/12 21:32:13 Ready to write response ...
	2024/09/12 21:32:13 Ready to marshal response ...
	2024/09/12 21:32:13 Ready to write response ...
	2024/09/12 21:40:25 Ready to marshal response ...
	2024/09/12 21:40:25 Ready to write response ...
	
	
	==> kernel <==
	 21:41:26 up 23 min,  0 users,  load average: 0.05, 0.15, 0.19
	Linux ubuntu-20-agent-2 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [cfa15392efa8] <==
	W0912 21:30:53.515207       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.6.115:443: connect: connection refused
	W0912 21:30:54.607859       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.6.115:443: connect: connection refused
	W0912 21:31:00.349682       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.73.198:443: connect: connection refused
	E0912 21:31:00.349714       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.73.198:443: connect: connection refused" logger="UnhandledError"
	W0912 21:31:22.369745       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.73.198:443: connect: connection refused
	E0912 21:31:22.369786       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.73.198:443: connect: connection refused" logger="UnhandledError"
	W0912 21:31:22.383399       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.73.198:443: connect: connection refused
	E0912 21:31:22.383541       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.73.198:443: connect: connection refused" logger="UnhandledError"
	I0912 21:31:51.170837       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0912 21:31:51.185695       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0912 21:32:03.568379       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0912 21:32:03.576544       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0912 21:32:03.669943       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0912 21:32:03.676286       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0912 21:32:03.841526       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0912 21:32:03.841700       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0912 21:32:03.852827       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0912 21:32:03.930541       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0912 21:32:04.591976       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0912 21:32:04.743982       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0912 21:32:04.893112       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0912 21:32:04.894067       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0912 21:32:04.931044       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0912 21:32:04.931722       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0912 21:32:05.090406       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [6318b9122cba] <==
	W0912 21:40:17.036337       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:17.036383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:40:18.708203       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:18.708242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:40:19.923726       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:19.923770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:40:43.134994       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:43.135036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:40:45.372397       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:45.372436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:40:50.853039       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:50.853101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:40:51.968168       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:51.968209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:40:52.667267       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:40:52.667312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:01.841264       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:01.841308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:06.620126       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:06.620174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:19.275747       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:19.275797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:41:21.467045       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:41:21.467086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:41:26.004993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.84µs"
	
	
	==> kube-proxy [5a71c17bf9b9] <==
	I0912 21:30:12.108119       1 server_linux.go:66] "Using iptables proxy"
	I0912 21:30:12.315948       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0912 21:30:12.316079       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:30:12.466573       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 21:30:12.466643       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:30:12.485666       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:30:12.486806       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:30:12.486825       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:30:12.491674       1 config.go:199] "Starting service config controller"
	I0912 21:30:12.491695       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:30:12.491730       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:30:12.491735       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:30:12.492341       1 config.go:328] "Starting node config controller"
	I0912 21:30:12.492354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:30:12.594605       1 shared_informer.go:320] Caches are synced for node config
	I0912 21:30:12.594652       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:30:12.594680       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6e8f72341e42] <==
	W0912 21:30:03.824838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 21:30:03.824858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:03.824876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:03.824890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:03.824883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0912 21:30:03.824922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:03.825129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:03.825162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:03.825310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:30:03.825331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:04.765620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:30:04.765662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:04.783414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 21:30:04.783451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:04.855768       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:30:04.855804       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:30:04.856361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:30:04.856395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:04.931012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:30:04.931055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:04.942477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 21:30:04.942527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 21:30:04.978965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:30:04.979014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 21:30:06.621827       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Wed 2024-07-31 19:05:20 UTC, end at Thu 2024-09-12 21:41:26 UTC. --
	Sep 12 21:41:11 ubuntu-20-agent-2 kubelet[18128]: E0912 21:41:11.112658   18128 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c3abb5cd-b8a7-415c-a8a9-fc01a0108574"
	Sep 12 21:41:16 ubuntu-20-agent-2 kubelet[18128]: E0912 21:41:16.113525   18128 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="738f7f62-8492-4a95-96dd-f7d2f9045907"
	Sep 12 21:41:24 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:24.110922   18128 scope.go:117] "RemoveContainer" containerID="2979c96985c7257b13ad240fe40c2588c62581959f297a6e2be6b017205fe66c"
	Sep 12 21:41:24 ubuntu-20-agent-2 kubelet[18128]: E0912 21:41:24.111108   18128 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-vxmdx_gadget(456fd1a8-3c3a-41af-9d30-e0a5bb8276a4)\"" pod="gadget/gadget-vxmdx" podUID="456fd1a8-3c3a-41af-9d30-e0a5bb8276a4"
	Sep 12 21:41:25 ubuntu-20-agent-2 kubelet[18128]: E0912 21:41:25.112539   18128 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c3abb5cd-b8a7-415c-a8a9-fc01a0108574"
	Sep 12 21:41:25 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:25.928072   18128 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/738f7f62-8492-4a95-96dd-f7d2f9045907-gcp-creds\") pod \"738f7f62-8492-4a95-96dd-f7d2f9045907\" (UID: \"738f7f62-8492-4a95-96dd-f7d2f9045907\") "
	Sep 12 21:41:25 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:25.928152   18128 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zs85\" (UniqueName: \"kubernetes.io/projected/738f7f62-8492-4a95-96dd-f7d2f9045907-kube-api-access-9zs85\") pod \"738f7f62-8492-4a95-96dd-f7d2f9045907\" (UID: \"738f7f62-8492-4a95-96dd-f7d2f9045907\") "
	Sep 12 21:41:25 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:25.928186   18128 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/738f7f62-8492-4a95-96dd-f7d2f9045907-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "738f7f62-8492-4a95-96dd-f7d2f9045907" (UID: "738f7f62-8492-4a95-96dd-f7d2f9045907"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 21:41:25 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:25.928307   18128 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/738f7f62-8492-4a95-96dd-f7d2f9045907-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 12 21:41:25 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:25.929995   18128 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738f7f62-8492-4a95-96dd-f7d2f9045907-kube-api-access-9zs85" (OuterVolumeSpecName: "kube-api-access-9zs85") pod "738f7f62-8492-4a95-96dd-f7d2f9045907" (UID: "738f7f62-8492-4a95-96dd-f7d2f9045907"). InnerVolumeSpecName "kube-api-access-9zs85". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.029042   18128 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9zs85\" (UniqueName: \"kubernetes.io/projected/738f7f62-8492-4a95-96dd-f7d2f9045907-kube-api-access-9zs85\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.331287   18128 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md6rx\" (UniqueName: \"kubernetes.io/projected/d87e815f-a8f5-4d9c-921b-fdc6c76b6645-kube-api-access-md6rx\") pod \"d87e815f-a8f5-4d9c-921b-fdc6c76b6645\" (UID: \"d87e815f-a8f5-4d9c-921b-fdc6c76b6645\") "
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.333679   18128 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87e815f-a8f5-4d9c-921b-fdc6c76b6645-kube-api-access-md6rx" (OuterVolumeSpecName: "kube-api-access-md6rx") pod "d87e815f-a8f5-4d9c-921b-fdc6c76b6645" (UID: "d87e815f-a8f5-4d9c-921b-fdc6c76b6645"). InnerVolumeSpecName "kube-api-access-md6rx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.432798   18128 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn487\" (UniqueName: \"kubernetes.io/projected/7bb49b8b-92a4-44db-b384-d1ec8357b811-kube-api-access-rn487\") pod \"7bb49b8b-92a4-44db-b384-d1ec8357b811\" (UID: \"7bb49b8b-92a4-44db-b384-d1ec8357b811\") "
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.432921   18128 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-md6rx\" (UniqueName: \"kubernetes.io/projected/d87e815f-a8f5-4d9c-921b-fdc6c76b6645-kube-api-access-md6rx\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.434914   18128 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb49b8b-92a4-44db-b384-d1ec8357b811-kube-api-access-rn487" (OuterVolumeSpecName: "kube-api-access-rn487") pod "7bb49b8b-92a4-44db-b384-d1ec8357b811" (UID: "7bb49b8b-92a4-44db-b384-d1ec8357b811"). InnerVolumeSpecName "kube-api-access-rn487". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.533694   18128 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rn487\" (UniqueName: \"kubernetes.io/projected/7bb49b8b-92a4-44db-b384-d1ec8357b811-kube-api-access-rn487\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.701462   18128 scope.go:117] "RemoveContainer" containerID="cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1"
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.720559   18128 scope.go:117] "RemoveContainer" containerID="cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1"
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: E0912 21:41:26.721433   18128 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1" containerID="cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1"
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.721477   18128 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1"} err="failed to get container status \"cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1\": rpc error: code = Unknown desc = Error response from daemon: No such container: cc13bc76cc8f4dc41fd00e9db486b7d300e867a9863ae9a0acbea858c47801d1"
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.721506   18128 scope.go:117] "RemoveContainer" containerID="8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349"
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.739250   18128 scope.go:117] "RemoveContainer" containerID="8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349"
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: E0912 21:41:26.740032   18128 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349" containerID="8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349"
	Sep 12 21:41:26 ubuntu-20-agent-2 kubelet[18128]: I0912 21:41:26.740079   18128 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349"} err="failed to get container status \"8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8691f6fa365cebfc79b93e2c681b9561bfe2a6f01a5f5d3084a3ea56f900c349"
	
	
	==> storage-provisioner [fd7e1ca4c296] <==
	I0912 21:30:13.925198       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:30:13.933629       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:30:13.933675       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:30:13.942156       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:30:13.942303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_773a1a35-3136-4716-a4f7-72573d19443e!
	I0912 21:30:13.944279       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f550c12-10ab-4bc4-b6a1-ceaa54f2a90a", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_773a1a35-3136-4716-a4f7-72573d19443e became leader
	I0912 21:30:14.042569       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_773a1a35-3136-4716-a4f7-72573d19443e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Thu, 12 Sep 2024 21:32:13 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qxnbt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qxnbt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m45s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.83s)


Test pass (111/168)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.26
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.1
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 1
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 68.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 102.37
29 TestAddons/serial/Volcano 37.37
31 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/parallel/InspektorGadget 10.44
36 TestAddons/parallel/MetricsServer 5.37
37 TestAddons/parallel/HelmTiller 9.29
39 TestAddons/parallel/CSI 38.21
40 TestAddons/parallel/Headlamp 15.83
41 TestAddons/parallel/CloudSpanner 6.26
43 TestAddons/parallel/NvidiaDevicePlugin 5.22
44 TestAddons/parallel/Yakd 10.39
45 TestAddons/StoppedEnableDisable 10.65
47 TestCertExpiration 226.26
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 26.46
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 30.59
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.06
65 TestFunctional/serial/MinikubeKubectlCmd 0.1
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 37.26
68 TestFunctional/serial/ComponentHealth 0.06
69 TestFunctional/serial/LogsCmd 0.76
70 TestFunctional/serial/LogsFileCmd 0.8
71 TestFunctional/serial/InvalidService 4.07
73 TestFunctional/parallel/ConfigCmd 0.25
74 TestFunctional/parallel/DashboardCmd 8.7
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.07
77 TestFunctional/parallel/StatusCmd 0.39
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.22
81 TestFunctional/parallel/ProfileCmd/profile_list 0.21
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
84 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
85 TestFunctional/parallel/ServiceCmd/List 0.32
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.14
89 TestFunctional/parallel/ServiceCmd/URL 0.14
90 TestFunctional/parallel/ServiceCmdConnect 7.28
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 22.81
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.18
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
107 TestFunctional/parallel/MySQL 20.61
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.79
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.72
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.37
122 TestFunctional/parallel/License 0.2
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.02
125 TestFunctional/delete_minikube_cached_images 0.01
130 TestImageBuild/serial/Setup 14.18
131 TestImageBuild/serial/NormalBuild 1.5
132 TestImageBuild/serial/BuildWithBuildArg 0.8
133 TestImageBuild/serial/BuildWithDockerIgnore 0.58
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.56
138 TestJSONOutput/start/Command 29.78
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.53
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.39
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 5.29
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.18
166 TestMainNoArgs 0.04
167 TestMinikubeProfile 33.01
175 TestPause/serial/Start 24.7
176 TestPause/serial/SecondStartNoReconfiguration 29.89
177 TestPause/serial/Pause 0.46
178 TestPause/serial/VerifyStatus 0.13
179 TestPause/serial/Unpause 0.38
180 TestPause/serial/PauseAgain 0.53
181 TestPause/serial/DeletePaused 1.61
182 TestPause/serial/VerifyDeletedResources 0.06
196 TestRunningBinaryUpgrade 69.94
198 TestStoppedBinaryUpgrade/Setup 0.47
199 TestStoppedBinaryUpgrade/Upgrade 49.72
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
201 TestKubernetesUpgrade 309
TestDownloadOnly/v1.20.0/json-events (1.26s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.259694586s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.26s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (51.996449ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:28:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:28:41.345797   12643 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:28:41.345885   12643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:41.345893   12643 out.go:358] Setting ErrFile to fd 2...
	I0912 21:28:41.345897   12643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:41.346075   12643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5765/.minikube/bin
	W0912 21:28:41.346196   12643 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19616-5765/.minikube/config/config.json: open /home/jenkins/minikube-integration/19616-5765/.minikube/config/config.json: no such file or directory
	I0912 21:28:41.346755   12643 out.go:352] Setting JSON to true
	I0912 21:28:41.347638   12643 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":672,"bootTime":1726175849,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:28:41.347687   12643 start.go:139] virtualization: kvm guest
	I0912 21:28:41.349973   12643 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0912 21:28:41.350060   12643 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5765/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:28:41.350097   12643 notify.go:220] Checking for updates...
	I0912 21:28:41.351443   12643 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:28:41.352875   12643 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:28:41.354259   12643 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	I0912 21:28:41.355580   12643 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	I0912 21:28:41.356706   12643 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.10s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (1s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (53.359117ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC | 12 Sep 24 21:28 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 12 Sep 24 21:28 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:28:42
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:28:42.882245   12794 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:28:42.882377   12794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:42.882387   12794 out.go:358] Setting ErrFile to fd 2...
	I0912 21:28:42.882392   12794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:28:42.882566   12794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5765/.minikube/bin
	I0912 21:28:42.883098   12794 out.go:352] Setting JSON to true
	I0912 21:28:42.884115   12794 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":674,"bootTime":1726175849,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:28:42.884174   12794 start.go:139] virtualization: kvm guest
	I0912 21:28:42.886338   12794 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0912 21:28:42.886454   12794 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5765/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:28:42.886506   12794 notify.go:220] Checking for updates...
	I0912 21:28:42.887976   12794 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:28:42.889452   12794 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:28:42.890832   12794 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	I0912 21:28:42.892042   12794 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	I0912 21:28:42.893089   12794 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:34985 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (68.56s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m6.947656029s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.611390632s)
--- PASS: TestOffline (68.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (44.315419ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (44.098002ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (102.37s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m42.370477211s)
--- PASS: TestAddons/Setup (102.37s)

TestAddons/serial/Volcano (37.37s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.780449ms
addons_test.go:905: volcano-admission stabilized in 8.061226ms
addons_test.go:897: volcano-scheduler stabilized in 8.220922ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-psl6w" [ae7fdddb-f0bd-4a25-9314-3f6af8d7af2e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003517069s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-jp4nw" [32591d03-c147-4447-9159-acae3f44e58b] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00372047s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dscps" [5e0590a2-58b3-4260-add4-96a9de7b6a49] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003166049s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ebaa6905-2a1d-43f6-9ddb-b994f3daedfe] Pending
helpers_test.go:344: "test-job-nginx-0" [ebaa6905-2a1d-43f6-9ddb-b994f3daedfe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ebaa6905-2a1d-43f6-9ddb-b994f3daedfe] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003803481s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.055909264s)
--- PASS: TestAddons/serial/Volcano (37.37s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/InspektorGadget (10.44s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vxmdx" [456fd1a8-3c3a-41af-9d30-e0a5bb8276a4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004155495s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.438780078s)
--- PASS: TestAddons/parallel/InspektorGadget (10.44s)

TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.059192ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-5lgxq" [1f086edc-d734-40a7-8336-4b99ca2ef54e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003769892s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.37s)

TestAddons/parallel/HelmTiller (9.29s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.900188ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-q5z94" [9be4f6aa-1461-4e08-87ef-af8cdb39e9d3] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003514964s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.019797988s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.29s)

TestAddons/parallel/CSI (38.21s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.156956ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [09ba753c-5ba8-4d35-8b6e-030647feb977] Pending
helpers_test.go:344: "task-pv-pod" [09ba753c-5ba8-4d35-8b6e-030647feb977] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [09ba753c-5ba8-4d35-8b6e-030647feb977] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003743908s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [65574a22-a546-410e-94d5-4a2c0420cac2] Pending
helpers_test.go:344: "task-pv-pod-restore" [65574a22-a546-410e-94d5-4a2c0420cac2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [65574a22-a546-410e-94d5-4a2c0420cac2] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003190429s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.294298489s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.21s)

TestAddons/parallel/Headlamp (15.83s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-l777d" [036699ba-cd19-4301-a427-b632937de866] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-l777d" [036699ba-cd19-4301-a427-b632937de866] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003297481s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.382652435s)
--- PASS: TestAddons/parallel/Headlamp (15.83s)

TestAddons/parallel/CloudSpanner (6.26s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-78dsw" [beb85c4f-adb3-4bf6-81d4-570fedb167b6] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002869689s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (6.26s)

TestAddons/parallel/NvidiaDevicePlugin (5.22s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-x2pzq" [bc25e507-54d5-4d43-81b2-c203b640ba13] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003511695s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.22s)

TestAddons/parallel/Yakd (10.39s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dbbx5" [9b15af08-f2bb-4375-85a0-325d0cc081c6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00359057s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.381117452s)
--- PASS: TestAddons/parallel/Yakd (10.39s)

TestAddons/StoppedEnableDisable (10.65s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.365873814s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.65s)

TestCertExpiration (226.26s)
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.633005666s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.079727348s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.542702808s)
--- PASS: TestCertExpiration (226.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19616-5765/.minikube/files/etc/test/nested/copy/12632/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (26.46s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (26.45875051s)
--- PASS: TestFunctional/serial/StartWithProxy (26.46s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.59s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (30.592844163s)
functional_test.go:663: soft start took 30.593445229s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (30.59s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.26s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.254704764s)
functional_test.go:761: restart took 37.254825216s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.26s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.76s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.76s)

TestFunctional/serial/LogsFileCmd (0.8s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1071072174/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.80s)

TestFunctional/serial/InvalidService (4.07s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (150.565151ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:30222 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)

TestFunctional/parallel/ConfigCmd (0.25s)
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (39.682462ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (39.286811ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)

TestFunctional/parallel/DashboardCmd (8.7s)
=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/12 21:48:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48563: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.70s)

TestFunctional/parallel/DryRun (0.15s)
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (74.671999ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0912 21:48:55.722204   48947 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:48:55.722716   48947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:48:55.722735   48947 out.go:358] Setting ErrFile to fd 2...
	I0912 21:48:55.722742   48947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:48:55.723264   48947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5765/.minikube/bin
	I0912 21:48:55.723992   48947 out.go:352] Setting JSON to false
	I0912 21:48:55.724937   48947 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1887,"bootTime":1726175849,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:48:55.724998   48947 start.go:139] virtualization: kvm guest
	I0912 21:48:55.727066   48947 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0912 21:48:55.728751   48947 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5765/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:48:55.728782   48947 notify.go:220] Checking for updates...
	I0912 21:48:55.728803   48947 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:48:55.730186   48947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:48:55.731433   48947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	I0912 21:48:55.732655   48947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	I0912 21:48:55.733671   48947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:48:55.734801   48947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:48:55.736405   48947 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:48:55.736693   48947 exec_runner.go:51] Run: systemctl --version
	I0912 21:48:55.739173   48947 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:48:55.750754   48947 out.go:177] * Using the none driver based on existing profile
	I0912 21:48:55.751947   48947 start.go:297] selected driver: none
	I0912 21:48:55.751966   48947 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:48:55.752072   48947 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:48:55.752098   48947 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0912 21:48:55.752408   48947 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0912 21:48:55.754570   48947 out.go:201] 
	W0912 21:48:55.755704   48947 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 21:48:55.756818   48947 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)

TestFunctional/parallel/InternationalLanguage (0.07s)
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (74.672361ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0912 21:48:55.875823   48977 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:48:55.876109   48977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:48:55.876119   48977 out.go:358] Setting ErrFile to fd 2...
	I0912 21:48:55.876126   48977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:48:55.876374   48977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-5765/.minikube/bin
	I0912 21:48:55.876886   48977 out.go:352] Setting JSON to false
	I0912 21:48:55.877851   48977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1887,"bootTime":1726175849,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:48:55.877908   48977 start.go:139] virtualization: kvm guest
	I0912 21:48:55.879912   48977 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0912 21:48:55.881156   48977 out.go:177]   - MINIKUBE_LOCATION=19616
	W0912 21:48:55.881181   48977 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-5765/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:48:55.881230   48977 notify.go:220] Checking for updates...
	I0912 21:48:55.883309   48977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:48:55.884479   48977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	I0912 21:48:55.885608   48977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	I0912 21:48:55.886644   48977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:48:55.887710   48977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:48:55.889425   48977 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:48:55.889744   48977 exec_runner.go:51] Run: systemctl --version
	I0912 21:48:55.892154   48977 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:48:55.901041   48977 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0912 21:48:55.902064   48977 start.go:297] selected driver: none
	I0912 21:48:55.902082   48977 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:48:55.902197   48977 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:48:55.902222   48977 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0912 21:48:55.902583   48977 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0912 21:48:55.904393   48977 out.go:201] 
	W0912 21:48:55.905426   48977 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 21:48:55.906432   48977 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.07s)

TestFunctional/parallel/StatusCmd (0.39s)
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.39s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.22s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.22s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "168.539138ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.027379ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "167.650137ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.069693ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-g2kw2" [2d321db8-614b-4a8a-9ec1-632604dc90b2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-g2kw2" [2d321db8-614b-4a8a-9ec1-632604dc90b2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003378627s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

TestFunctional/parallel/ServiceCmd/List (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "330.365278ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:32317
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.14s)

TestFunctional/parallel/ServiceCmd/URL (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:32317
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)

TestFunctional/parallel/ServiceCmdConnect (7.28s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qdjjw" [202fcfab-0840-46fc-a2f5-c15ce45ab2c2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qdjjw" [202fcfab-0840-46fc-a2f5-c15ce45ab2c2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.002771824s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:32723
functional_test.go:1675: http://10.138.0.48:32723: success! body:

Hostname: hello-node-connect-67bdd5bbb4-qdjjw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:32723
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.28s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (22.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d91a9a0b-235c-4736-a715-670dcd6163d6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003295174s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6aab889e-9a7f-472c-aac1-559837d659aa] Pending
helpers_test.go:344: "sp-pod" [6aab889e-9a7f-472c-aac1-559837d659aa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6aab889e-9a7f-472c-aac1-559837d659aa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003654344s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.123486342s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [93040cbf-b400-43c9-ac48-8aa202b098f1] Pending
helpers_test.go:344: "sp-pod" [93040cbf-b400-43c9-ac48-8aa202b098f1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [93040cbf-b400-43c9-ac48-8aa202b098f1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003745231s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.81s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 50680: operation not permitted
helpers_test.go:508: unable to kill pid 50629: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [655685d6-0dd5-4396-9079-39ef0e66779f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [655685d6-0dd5-4396-9079-39ef0e66779f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003415388s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.152.103 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MySQL (20.61s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-c6tcq" [7211d869-fe90-4d90-813e-10d63fff53eb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-c6tcq" [7211d869-fe90-4d90-813e-10d63fff53eb] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003355012s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-c6tcq -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-c6tcq -- mysql -ppassword -e "show databases;": exit status 1 (121.325778ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-c6tcq -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-c6tcq -- mysql -ppassword -e "show databases;": exit status 1 (105.281897ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-c6tcq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.79s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.78769719s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.79s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.72s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.724323714s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.72s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.37s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.37s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (14.18s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.177990212s)
--- PASS: TestImageBuild/serial/Setup (14.18s)

TestImageBuild/serial/NormalBuild (1.5s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.498060395s)
--- PASS: TestImageBuild/serial/NormalBuild (1.50s)

TestImageBuild/serial/BuildWithBuildArg (0.8s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.80s)

TestImageBuild/serial/BuildWithDockerIgnore (0.58s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.58s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.56s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.56s)

TestJSONOutput/start/Command (29.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (29.781512615s)
--- PASS: TestJSONOutput/start/Command (29.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.39s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.39s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.29s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.285431094s)
--- PASS: TestJSONOutput/stop/Command (5.29s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.608281ms)

-- stdout --
	{"specversion":"1.0","id":"55cbe7ed-2fec-4fa2-868b-b035a82e8499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1827320d-99bc-413b-a8a6-20df9636f408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"809f3a55-4baf-4d15-8d89-1cd5e203995f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a04d9ce0-0948-4ad9-994f-998a71a880a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig"}}
	{"specversion":"1.0","id":"2b0c101c-7b74-4a23-9f28-f78991ef5214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube"}}
	{"specversion":"1.0","id":"21870aca-a7a6-4381-a0fe-2245913a68b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"189afed3-c971-4c44-81c3-dab3972d87c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3fe671e-372d-4b98-8e95-f77084c45647","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.18s)
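The `--output=json` stream above is one CloudEvents-style JSON object per line. As a rough sketch (field names copied from the log above; the `find_errors` helper is hypothetical, not part of minikube), the error event can be pulled out of such a stream like this:

```python
import json

# Two events abridged from the log above: one info event, one error event.
raw = """\
{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}
"""

def find_errors(stream):
    """Yield (name, exitcode, message) for each minikube error event in the stream."""
    for line in stream.splitlines():
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event["data"]
            yield data["name"], data["exitcode"], data["message"]

errors = list(find_errors(raw))
```

Note that `exitcode` is emitted as a string ("56"), matching the process exit status 56 reported by the test.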

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (33.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.657033296s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.46497547s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.290467214s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.01s)

TestPause/serial/Start (24.7s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (24.700113763s)
--- PASS: TestPause/serial/Start (24.70s)

TestPause/serial/SecondStartNoReconfiguration (29.89s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.887962648s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.89s)

TestPause/serial/Pause (0.46s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.46s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (127.959264ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
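The `--layout=cluster` payload above reports HTTP-style status codes (200 OK, 418 Paused, 405 Stopped), and a paused cluster makes `minikube status` exit with status 2, which is why the non-zero exit is expected here. A minimal sketch for reading component state out of that JSON (the `component_states` helper is hypothetical; the payload is abridged from the log):

```python
import json

# Status payload abridged from the log above.
payload = json.loads("""
{"Name": "minikube", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "minikube", "StatusCode": 200, "StatusName": "OK",
            "Components": {"apiserver": {"StatusCode": 418, "StatusName": "Paused"},
                           "kubelet":   {"StatusCode": 405, "StatusName": "Stopped"}}}]}
""")

def component_states(status):
    """Map component name -> StatusName across all nodes in the cluster layout."""
    return {
        name: comp["StatusName"]
        for node in status.get("Nodes", [])
        for name, comp in node.get("Components", {}).items()
    }

states = component_states(payload)
```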

TestPause/serial/Unpause (0.38s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.38s)

TestPause/serial/PauseAgain (0.53s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

TestPause/serial/DeletePaused (1.61s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.608878198s)
--- PASS: TestPause/serial/DeletePaused (1.61s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (69.94s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2535366996 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2535366996 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (29.853662899s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (36.445460009s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.111962485s)
--- PASS: TestRunningBinaryUpgrade (69.94s)

TestStoppedBinaryUpgrade/Setup (0.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

TestStoppedBinaryUpgrade/Upgrade (49.72s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2798065485 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2798065485 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.655611203s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2798065485 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2798065485 -p minikube stop: (23.659546109s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.400126731s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.72s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestKubernetesUpgrade (309s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (33.244235334s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (69.418331ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m15.361209218s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (66.006973ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-5765/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-5765/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.135678978s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.261033624s)
--- PASS: TestKubernetesUpgrade (309.00s)


Test skip (56/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.13
190 TestStartStop/group/newest-cni 0.13
191 TestStartStop/group/default-k8s-diff-port 0.13
192 TestStartStop/group/no-preload 0.12
193 TestStartStop/group/disable-driver-mounts 0.13
194 TestStartStop/group/embed-certs 0.12
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.12s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.12s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.12s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)