Test Report: none_Linux 19598

cb70ad94d69a229bf8d3511a5a00af396fa2386e:2024-09-10:36157

Failed tests (1/168)

|-------|------------------------------|--------------|
| Order |         Failed test          | Duration (s) |
|-------|------------------------------|--------------|
|    33 | TestAddons/parallel/Registry |        71.73 |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (71.73s)
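For context on the log below: the test waits for the registry and registry-proxy pods to become healthy (addons_test.go:334 and :337), then runs a one-shot busybox pod that must reach the registry's ClusterIP service through cluster DNS and produce "HTTP/1.1 200" in its wget output (addons_test.go:347-353). In this run the probe pod timed out after 1m0s. A minimal sketch of the same probe run by hand, assuming the registry addon is still enabled and kubectl is pointed at the minikube context:

    # Launch a throwaway busybox pod inside the cluster and probe the registry
    # service via cluster DNS; a healthy registry answers with "HTTP/1.1 200 OK",
    # and --rm removes the pod afterwards (hence "pod registry-test deleted").
    kubectl --context minikube run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

The "error: timed out waiting for the condition" in the stderr block is kubectl's generic wait timeout: the probe never completed within the wait window, as opposed to the registry answering with a non-200 response.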

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.598908ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-g4sqx" [41df7a66-1627-4588-93cb-12aa6056b911] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004027898s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r8m4h" [9943c8e8-b9ee-43fa-a4eb-dd49156651ce] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002917465s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.074619904s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/10 17:41:25 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:33859               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:31 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 10 Sep 24 17:32 UTC | 10 Sep 24 17:32 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:41 UTC | 10 Sep 24 17:41 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 10 Sep 24 17:41 UTC | 10 Sep 24 17:41 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:51
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:51.456371   16364 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:51.456609   16364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:51.456618   16364 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:51.456622   16364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:51.456811   16364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5822/.minikube/bin
	I0910 17:29:51.457418   16364 out.go:352] Setting JSON to false
	I0910 17:29:51.458232   16364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":740,"bootTime":1725988651,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:51.458305   16364 start.go:139] virtualization: kvm guest
	I0910 17:29:51.460426   16364 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0910 17:29:51.461535   16364 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-5822/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:29:51.461546   16364 notify.go:220] Checking for updates...
	I0910 17:29:51.461584   16364 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:29:51.462911   16364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:51.464124   16364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	I0910 17:29:51.465500   16364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	I0910 17:29:51.466817   16364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:29:51.468114   16364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:29:51.469594   16364 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:29:51.478820   16364 out.go:177] * Using the none driver based on user configuration
	I0910 17:29:51.479966   16364 start.go:297] selected driver: none
	I0910 17:29:51.479977   16364 start.go:901] validating driver "none" against <nil>
	I0910 17:29:51.479987   16364 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:29:51.480012   16364 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0910 17:29:51.480310   16364 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0910 17:29:51.480921   16364 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0910 17:29:51.481114   16364 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:29:51.481141   16364 cni.go:84] Creating CNI manager for ""
	I0910 17:29:51.481152   16364 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:29:51.481161   16364 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 17:29:51.481198   16364 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:51.482506   16364 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0910 17:29:51.483847   16364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/config.json ...
	I0910 17:29:51.483872   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/config.json: {Name:mkba33f48f98d682ee826671cec5eb5b450c4469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:51.483988   16364 start.go:360] acquireMachinesLock for minikube: {Name:mk7262ba6282b11286f2bc50f36dd947c77a5f7d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 17:29:51.484018   16364 start.go:364] duration metric: took 19.037µs to acquireMachinesLock for "minikube"
	I0910 17:29:51.484030   16364 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:29:51.484091   16364 start.go:125] createHost starting for "" (driver="none")
	I0910 17:29:51.485383   16364 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0910 17:29:51.486501   16364 exec_runner.go:51] Run: systemctl --version
	I0910 17:29:51.488799   16364 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0910 17:29:51.488832   16364 client.go:168] LocalClient.Create starting
	I0910 17:29:51.488916   16364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5822/.minikube/certs/ca.pem
	I0910 17:29:51.488944   16364 main.go:141] libmachine: Decoding PEM data...
	I0910 17:29:51.488957   16364 main.go:141] libmachine: Parsing certificate...
	I0910 17:29:51.489003   16364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19598-5822/.minikube/certs/cert.pem
	I0910 17:29:51.489022   16364 main.go:141] libmachine: Decoding PEM data...
	I0910 17:29:51.489033   16364 main.go:141] libmachine: Parsing certificate...
	I0910 17:29:51.489301   16364 client.go:171] duration metric: took 449.301µs to LocalClient.Create
	I0910 17:29:51.489320   16364 start.go:167] duration metric: took 523.424µs to libmachine.API.Create "minikube"
	I0910 17:29:51.489326   16364 start.go:293] postStartSetup for "minikube" (driver="none")
	I0910 17:29:51.489363   16364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 17:29:51.489400   16364 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 17:29:51.498021   16364 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0910 17:29:51.498047   16364 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0910 17:29:51.498057   16364 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0910 17:29:51.500014   16364 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0910 17:29:51.501314   16364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5822/.minikube/addons for local assets ...
	I0910 17:29:51.501375   16364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19598-5822/.minikube/files for local assets ...
	I0910 17:29:51.501398   16364 start.go:296] duration metric: took 12.067529ms for postStartSetup
	I0910 17:29:51.501913   16364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/config.json ...
	I0910 17:29:51.502028   16364 start.go:128] duration metric: took 17.93017ms to createHost
	I0910 17:29:51.502039   16364 start.go:83] releasing machines lock for "minikube", held for 18.012912ms
	I0910 17:29:51.502333   16364 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0910 17:29:51.502435   16364 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0910 17:29:51.504210   16364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 17:29:51.504274   16364 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 17:29:51.513413   16364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 17:29:51.513432   16364 start.go:495] detecting cgroup driver to use...
	I0910 17:29:51.513454   16364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0910 17:29:51.513541   16364 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:29:51.530297   16364 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0910 17:29:51.540083   16364 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 17:29:51.547948   16364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 17:29:51.548001   16364 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 17:29:51.557516   16364 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:29:51.566169   16364 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 17:29:51.574980   16364 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 17:29:51.583048   16364 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 17:29:51.590188   16364 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 17:29:51.598180   16364 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0910 17:29:51.607516   16364 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0910 17:29:51.616455   16364 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 17:29:51.623693   16364 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 17:29:51.630060   16364 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0910 17:29:51.834383   16364 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0910 17:29:51.899427   16364 start.go:495] detecting cgroup driver to use...
	I0910 17:29:51.899468   16364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0910 17:29:51.899555   16364 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 17:29:51.920272   16364 exec_runner.go:51] Run: which cri-dockerd
	I0910 17:29:51.921139   16364 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 17:29:51.928887   16364 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0910 17:29:51.928905   16364 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0910 17:29:51.928938   16364 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0910 17:29:51.936134   16364 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0910 17:29:51.936317   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1669526821 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0910 17:29:51.944420   16364 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0910 17:29:52.163203   16364 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0910 17:29:52.377321   16364 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 17:29:52.377469   16364 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0910 17:29:52.377486   16364 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0910 17:29:52.377533   16364 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0910 17:29:52.385916   16364 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0910 17:29:52.386059   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4140262185 /etc/docker/daemon.json
	I0910 17:29:52.394564   16364 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0910 17:29:52.612669   16364 exec_runner.go:51] Run: sudo systemctl restart docker
	I0910 17:29:52.898358   16364 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0910 17:29:52.909193   16364 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0910 17:29:52.924020   16364 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:29:52.934028   16364 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0910 17:29:53.134389   16364 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0910 17:29:53.347103   16364 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0910 17:29:53.547542   16364 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0910 17:29:53.561626   16364 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0910 17:29:53.571878   16364 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0910 17:29:53.777463   16364 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0910 17:29:53.845930   16364 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 17:29:53.845995   16364 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0910 17:29:53.847638   16364 start.go:563] Will wait 60s for crictl version
	I0910 17:29:53.847680   16364 exec_runner.go:51] Run: which crictl
	I0910 17:29:53.848488   16364 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0910 17:29:53.876656   16364 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0910 17:29:53.876713   16364 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0910 17:29:53.897243   16364 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0910 17:29:53.923802   16364 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.1 ...
	I0910 17:29:53.923874   16364 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0910 17:29:53.926528   16364 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0910 17:29:53.927788   16364 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0910 17:29:53.927891   16364 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0910 17:29:53.927905   16364 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.0 docker true true} ...
	I0910 17:29:53.927987   16364 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0910 17:29:53.928028   16364 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0910 17:29:53.977938   16364 cni.go:84] Creating CNI manager for ""
	I0910 17:29:53.977970   16364 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:29:53.977982   16364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0910 17:29:53.978008   16364 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 17:29:53.978211   16364 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 17:29:53.978279   16364 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0910 17:29:53.987099   16364 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0910 17:29:53.987160   16364 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0910 17:29:53.995453   16364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0910 17:29:53.995453   16364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0910 17:29:53.995510   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0910 17:29:53.995522   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0910 17:29:53.995453   16364 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0910 17:29:53.995609   16364 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:29:54.006613   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0910 17:29:54.041545   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3096782151 /var/lib/minikube/binaries/v1.31.0/kubectl
	I0910 17:29:54.051326   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3485134113 /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0910 17:29:54.070401   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2942754041 /var/lib/minikube/binaries/v1.31.0/kubelet
	I0910 17:29:54.135317   16364 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 17:29:54.143495   16364 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0910 17:29:54.143511   16364 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0910 17:29:54.143542   16364 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0910 17:29:54.150495   16364 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0910 17:29:54.150609   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube834936767 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0910 17:29:54.158828   16364 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0910 17:29:54.158845   16364 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0910 17:29:54.158872   16364 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0910 17:29:54.165643   16364 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 17:29:54.165757   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube76267725 /lib/systemd/system/kubelet.service
	I0910 17:29:54.172643   16364 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0910 17:29:54.172742   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3470866814 /var/tmp/minikube/kubeadm.yaml.new
	I0910 17:29:54.185385   16364 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0910 17:29:54.186519   16364 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0910 17:29:54.397831   16364 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0910 17:29:54.411095   16364 certs.go:68] Setting up /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube for IP: 10.138.0.48
	I0910 17:29:54.411123   16364 certs.go:194] generating shared ca certs ...
	I0910 17:29:54.411143   16364 certs.go:226] acquiring lock for ca certs: {Name:mk3509a5a1f4a382b867320321a79ac03a027d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.411269   16364 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19598-5822/.minikube/ca.key
	I0910 17:29:54.411323   16364 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19598-5822/.minikube/proxy-client-ca.key
	I0910 17:29:54.411336   16364 certs.go:256] generating profile certs ...
	I0910 17:29:54.411432   16364 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/client.key
	I0910 17:29:54.411455   16364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/client.crt with IP's: []
	I0910 17:29:54.592581   16364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/client.crt ...
	I0910 17:29:54.592608   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/client.crt: {Name:mka0b986c43214587c130ba6187471aebf28bf98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.592801   16364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/client.key ...
	I0910 17:29:54.592812   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/client.key: {Name:mkc0d012541e0b60d96c274bfbf6dad777169b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.592873   16364 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0910 17:29:54.592887   16364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0910 17:29:54.684540   16364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0910 17:29:54.684572   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk3c2263edc0acded5c948efc5d0abb07f9c9b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.684719   16364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0910 17:29:54.684731   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf06ad178e9380da3ac085e1c2565f768c3056a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.684792   16364 certs.go:381] copying /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.crt
	I0910 17:29:54.684868   16364 certs.go:385] copying /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.key
	I0910 17:29:54.684931   16364 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.key
	I0910 17:29:54.684949   16364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0910 17:29:54.756299   16364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.crt ...
	I0910 17:29:54.756329   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.crt: {Name:mk292ba86337c7dc640d14327371368244bd284e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.756452   16364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.key ...
	I0910 17:29:54.756461   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.key: {Name:mk1f7bb53aa34fcb426557329a13eace1463450d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:29:54.756671   16364 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5822/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 17:29:54.756702   16364 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5822/.minikube/certs/ca.pem (1078 bytes)
	I0910 17:29:54.756725   16364 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5822/.minikube/certs/cert.pem (1123 bytes)
	I0910 17:29:54.756748   16364 certs.go:484] found cert: /home/jenkins/minikube-integration/19598-5822/.minikube/certs/key.pem (1679 bytes)
	I0910 17:29:54.757310   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 17:29:54.757456   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1985518523 /var/lib/minikube/certs/ca.crt
	I0910 17:29:54.765782   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 17:29:54.765888   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube344238289 /var/lib/minikube/certs/ca.key
	I0910 17:29:54.773647   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 17:29:54.773756   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1625839644 /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 17:29:54.780744   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0910 17:29:54.780842   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3963671734 /var/lib/minikube/certs/proxy-client-ca.key
	I0910 17:29:54.787934   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0910 17:29:54.788028   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube303056167 /var/lib/minikube/certs/apiserver.crt
	I0910 17:29:54.795106   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 17:29:54.795207   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3916022045 /var/lib/minikube/certs/apiserver.key
	I0910 17:29:54.802655   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 17:29:54.802760   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3747575905 /var/lib/minikube/certs/proxy-client.crt
	I0910 17:29:54.809969   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 17:29:54.810084   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3649925360 /var/lib/minikube/certs/proxy-client.key
	I0910 17:29:54.817184   16364 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0910 17:29:54.817200   16364 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:54.817227   16364 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:54.823811   16364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19598-5822/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 17:29:54.823926   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1093942545 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:54.830897   16364 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 17:29:54.831010   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1401674700 /var/lib/minikube/kubeconfig
	I0910 17:29:54.838342   16364 exec_runner.go:51] Run: openssl version
	I0910 17:29:54.840975   16364 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 17:29:54.848741   16364 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:54.849967   16364 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 10 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:54.850012   16364 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 17:29:54.852883   16364 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 17:29:54.860365   16364 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0910 17:29:54.861408   16364 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0910 17:29:54.861446   16364 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:29:54.861548   16364 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 17:29:54.876647   16364 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 17:29:54.886010   16364 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 17:29:54.893238   16364 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0910 17:29:54.913329   16364 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 17:29:54.921335   16364 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 17:29:54.921351   16364 kubeadm.go:157] found existing configuration files:
	
	I0910 17:29:54.921396   16364 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0910 17:29:54.928883   16364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0910 17:29:54.928928   16364 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0910 17:29:54.935971   16364 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0910 17:29:54.943437   16364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0910 17:29:54.943484   16364 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0910 17:29:54.950760   16364 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0910 17:29:54.957803   16364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0910 17:29:54.957843   16364 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 17:29:54.964409   16364 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0910 17:29:54.971440   16364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0910 17:29:54.971482   16364 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 17:29:54.979179   16364 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 17:29:55.009751   16364 kubeadm.go:310] W0910 17:29:55.009631   17264 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:29:55.010254   16364 kubeadm.go:310] W0910 17:29:55.010196   17264 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0910 17:29:55.011911   16364 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0910 17:29:55.011957   16364 kubeadm.go:310] [preflight] Running pre-flight checks
	I0910 17:29:55.107055   16364 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 17:29:55.107085   16364 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 17:29:55.107094   16364 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 17:29:55.107111   16364 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 17:29:55.117909   16364 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 17:29:55.120566   16364 out.go:235]   - Generating certificates and keys ...
	I0910 17:29:55.120613   16364 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0910 17:29:55.120624   16364 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0910 17:29:55.267716   16364 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 17:29:55.597452   16364 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0910 17:29:55.834550   16364 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0910 17:29:55.983667   16364 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0910 17:29:56.074098   16364 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0910 17:29:56.074244   16364 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0910 17:29:56.324504   16364 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0910 17:29:56.324622   16364 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0910 17:29:56.507629   16364 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 17:29:56.655752   16364 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 17:29:56.860091   16364 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0910 17:29:56.860217   16364 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 17:29:56.964659   16364 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 17:29:57.250072   16364 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0910 17:29:57.332089   16364 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 17:29:57.381847   16364 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 17:29:57.523897   16364 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 17:29:57.524448   16364 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 17:29:57.526602   16364 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 17:29:57.528517   16364 out.go:235]   - Booting up control plane ...
	I0910 17:29:57.528542   16364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 17:29:57.528562   16364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 17:29:57.529149   16364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 17:29:57.549369   16364 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 17:29:57.553569   16364 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 17:29:57.553586   16364 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0910 17:29:57.783232   16364 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0910 17:29:57.783259   16364 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0910 17:29:58.284762   16364 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.50167ms
	I0910 17:29:58.284787   16364 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0910 17:30:02.286607   16364 kubeadm.go:310] [api-check] The API server is healthy after 4.001825932s
	I0910 17:30:02.297249   16364 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 17:30:02.307133   16364 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 17:30:02.322304   16364 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 17:30:02.322320   16364 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 17:30:02.328826   16364 kubeadm.go:310] [bootstrap-token] Using token: pafn5v.4x251a97bkjb05qx
	I0910 17:30:02.329996   16364 out.go:235]   - Configuring RBAC rules ...
	I0910 17:30:02.330027   16364 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 17:30:02.332831   16364 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 17:30:02.337690   16364 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 17:30:02.339893   16364 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 17:30:02.341971   16364 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 17:30:02.344860   16364 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 17:30:02.692551   16364 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 17:30:03.110497   16364 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0910 17:30:03.691529   16364 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0910 17:30:03.692354   16364 kubeadm.go:310] 
	I0910 17:30:03.692368   16364 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0910 17:30:03.692372   16364 kubeadm.go:310] 
	I0910 17:30:03.692376   16364 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0910 17:30:03.692380   16364 kubeadm.go:310] 
	I0910 17:30:03.692383   16364 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0910 17:30:03.692387   16364 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 17:30:03.692398   16364 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 17:30:03.692402   16364 kubeadm.go:310] 
	I0910 17:30:03.692405   16364 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0910 17:30:03.692409   16364 kubeadm.go:310] 
	I0910 17:30:03.692414   16364 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 17:30:03.692418   16364 kubeadm.go:310] 
	I0910 17:30:03.692423   16364 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0910 17:30:03.692427   16364 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 17:30:03.692433   16364 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 17:30:03.692438   16364 kubeadm.go:310] 
	I0910 17:30:03.692443   16364 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 17:30:03.692453   16364 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0910 17:30:03.692458   16364 kubeadm.go:310] 
	I0910 17:30:03.692462   16364 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pafn5v.4x251a97bkjb05qx \
	I0910 17:30:03.692466   16364 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9970a3a439df8190e6de6e02ac6a427c13895088a8716be42269b88d55493f61 \
	I0910 17:30:03.692469   16364 kubeadm.go:310] 	--control-plane 
	I0910 17:30:03.692471   16364 kubeadm.go:310] 
	I0910 17:30:03.692474   16364 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0910 17:30:03.692476   16364 kubeadm.go:310] 
	I0910 17:30:03.692479   16364 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pafn5v.4x251a97bkjb05qx \
	I0910 17:30:03.692482   16364 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9970a3a439df8190e6de6e02ac6a427c13895088a8716be42269b88d55493f61 
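Note: the --discovery-token-ca-cert-hash that joining nodes must present is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, hex-encoded with a "sha256:" prefix. A minimal Go sketch that recomputes it, assuming the certificate directory from the [certs] phase above (/var/lib/minikube/certs):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// CA path assumed from the "[certs] Using certificateDir folder" line above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}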
	I0910 17:30:03.695539   16364 cni.go:84] Creating CNI manager for ""
	I0910 17:30:03.695562   16364 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 17:30:03.697766   16364 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 17:30:03.699484   16364 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0910 17:30:03.709635   16364 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0910 17:30:03.709752   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2715325492 /etc/cni/net.d/1-k8s.conflist
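The 496-byte file copied here is minikube's bridge CNI conflist. Its exact contents are not in the log; below is a representative bridge-plus-portmap conflist of the general shape minikube writes, with every field value illustrative rather than taken from the live file:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}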
	I0910 17:30:03.720096   16364 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 17:30:03.720143   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:03.720221   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_10T17_30_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0910 17:30:03.729450   16364 ops.go:34] apiserver oom_adj: -16
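The "apiserver oom_adj: -16" reading is the value returned by the cat-over-/proc one-liner three lines up; a negative adjust value tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A tiny Go equivalent of that read, assuming a single kube-apiserver process on the host:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same idea as the shell one-liner above: find the apiserver pid, read its OOM adjust value.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.Fields(string(out))[0]                // first match; assumes one apiserver
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj") // legacy node; newer kernels use oom_score_adj
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}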
	I0910 17:30:03.795008   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:04.295557   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:04.795220   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.295482   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:05.795749   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.295844   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:06.795194   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.295932   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:07.795880   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.295397   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.795273   16364 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 17:30:08.873323   16364 kubeadm.go:1113] duration metric: took 5.153230319s to wait for elevateKubeSystemPrivileges
	I0910 17:30:08.873358   16364 kubeadm.go:394] duration metric: took 14.011914317s to StartCluster
	I0910 17:30:08.873381   16364 settings.go:142] acquiring lock: {Name:mk03c1a7eb8f12119fba1de65ca4a12461bb0bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.873444   16364 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19598-5822/kubeconfig
	I0910 17:30:08.874113   16364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19598-5822/kubeconfig: {Name:mka5e59e39fae2e74490b8fa0637c483e8dcdadd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 17:30:08.874378   16364 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 17:30:08.874467   16364 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0910 17:30:08.874559   16364 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0910 17:30:08.874575   16364 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0910 17:30:08.874580   16364 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:30:08.874594   16364 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0910 17:30:08.874596   16364 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0910 17:30:08.874620   16364 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0910 17:30:08.874634   16364 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0910 17:30:08.874636   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.874655   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.874709   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.875240   16364 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0910 17:30:08.875264   16364 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0910 17:30:08.875297   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.875318   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.875334   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.875393   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.875445   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.875463   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.875500   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.875543   16364 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0910 17:30:08.875598   16364 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0910 17:30:08.875626   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.875918   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.875928   16364 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0910 17:30:08.875941   16364 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0910 17:30:08.875952   16364 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0910 17:30:08.875962   16364 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0910 17:30:08.875978   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.876152   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.876164   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.876193   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.876260   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.876271   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.876297   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.876435   16364 addons.go:69] Setting volcano=true in profile "minikube"
	I0910 17:30:08.876451   16364 addons.go:69] Setting registry=true in profile "minikube"
	I0910 17:30:08.876463   16364 addons.go:234] Setting addon volcano=true in "minikube"
	I0910 17:30:08.876468   16364 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0910 17:30:08.876481   16364 addons.go:234] Setting addon registry=true in "minikube"
	I0910 17:30:08.876495   16364 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0910 17:30:08.876500   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.876513   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.876517   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.876721   16364 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0910 17:30:08.876740   16364 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0910 17:30:08.876763   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.876832   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.876851   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.876883   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.877111   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.877120   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.877125   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.877133   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.877149   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.877157   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.877161   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.877162   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.877196   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.877274   16364 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0910 17:30:08.877296   16364 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0910 17:30:08.877328   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.877339   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.877366   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.877532   16364 out.go:177] * Configuring local host environment ...
	I0910 17:30:08.875934   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.878059   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.874567   16364 addons.go:69] Setting yakd=true in profile "minikube"
	I0910 17:30:08.878772   16364 addons.go:234] Setting addon yakd=true in "minikube"
	I0910 17:30:08.878803   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:08.876459   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.878984   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.879057   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.878833   16364 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0910 17:30:08.879484   16364 mustload.go:65] Loading cluster: minikube
	W0910 17:30:08.879652   16364 out.go:270] * 
	W0910 17:30:08.879671   16364 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0910 17:30:08.879681   16364 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0910 17:30:08.879690   16364 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0910 17:30:08.879701   16364 out.go:270] * 
	W0910 17:30:08.879747   16364 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0910 17:30:08.879759   16364 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0910 17:30:08.879766   16364 out.go:270] * 
	W0910 17:30:08.879789   16364 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0910 17:30:08.879798   16364 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0910 17:30:08.879805   16364 out.go:270] * 
	W0910 17:30:08.879811   16364 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0910 17:30:08.879839   16364 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 17:30:08.881513   16364 out.go:177] * Verifying Kubernetes components...
	I0910 17:30:08.884433   16364 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0910 17:30:08.899134   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.899168   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.900263   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.903842   16364 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:30:08.904113   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.904189   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.904227   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.904348   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.904366   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.904402   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.919859   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.920182   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:08.920211   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:08.920253   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:08.922075   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.922089   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.922090   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.936119   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.936211   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.936730   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.936815   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.947254   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.947291   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.947331   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.947508   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.948123   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.964162   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.970419   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.972760   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.972855   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.975459   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.975526   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.994489   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:08.995377   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.995437   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.995445   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.995493   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.995611   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.995657   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:08.995805   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:08.995821   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:08.996328   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:08.996343   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:08.996682   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:08.996692   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:08.997855   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:08.997900   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.005811   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.006641   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.006641   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
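The healthz probes above are preceded by a freezer check: each goroutine reads /proc/<pid>/cgroup to locate the apiserver's cgroup-v1 freezer path, reads its freezer.state, and only probes an apiserver whose state is "THAWED" (i.e. not paused). A minimal Go sketch of that check, using the cgroup-v1 paths shown in the log and taking the apiserver pid as its argument:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// freezerState finds pid's cgroup-v1 freezer path via /proc/<pid>/cgroup
	// and reads the matching freezer.state, mirroring the two Run: steps above.
	func freezerState(pid string) (string, error) {
		data, err := os.ReadFile("/proc/" + pid + "/cgroup")
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(data), "\n") {
			// cgroup-v1 entries look like "8:freezer:/kubepods/burstable/pod<uid>/<id>".
			parts := strings.SplitN(line, ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
				if err != nil {
					return "", err
				}
				return strings.TrimSpace(string(state)), nil
			}
		}
		return "", fmt.Errorf("no freezer cgroup for pid %s", pid)
	}

	func main() {
		state, err := freezerState(os.Args[1])
		if err != nil {
			panic(err)
		}
		fmt.Println(state) // "THAWED" for a running, unpaused apiserver
	}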
	I0910 17:30:09.008186   16364 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0910 17:30:09.008291   16364 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0910 17:30:09.009327   16364 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0910 17:30:09.015387   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:09.015448   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.015609   16364 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:30:09.015631   16364 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0910 17:30:09.015667   16364 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:30:09.015685   16364 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0910 17:30:09.016278   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3124147629 /etc/kubernetes/addons/ig-namespace.yaml
	I0910 17:30:09.016550   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1925213136 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0910 17:30:09.016644   16364 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:09.016671   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0910 17:30:09.016783   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1815122764 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:09.027524   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:09.027668   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.027698   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.027858   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.027877   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.029707   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.029730   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.030135   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.030148   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.030325   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.030336   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.030815   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:09.030861   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.035456   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.035473   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.035705   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.035926   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.037334   16364 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0910 17:30:09.037373   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:09.037492   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.037507   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:09.038004   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:09.038018   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:09.038053   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:09.038233   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.038841   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:09.038920   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.039596   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.041925   16364 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0910 17:30:09.041986   16364 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 17:30:09.044272   16364 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:30:09.044309   16364 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0910 17:30:09.044457   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3269512169 /etc/kubernetes/addons/yakd-ns.yaml
	I0910 17:30:09.044647   16364 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0910 17:30:09.044809   16364 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.044826   16364 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0910 17:30:09.044832   16364 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.044874   16364 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.046702   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.047024   16364 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0910 17:30:09.047969   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0910 17:30:09.049465   16364 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0910 17:30:09.049545   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0910 17:30:09.053172   16364 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:30:09.053196   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0910 17:30:09.053688   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3843240921 /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:30:09.053877   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0910 17:30:09.055193   16364 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:30:09.055216   16364 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0910 17:30:09.055491   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2850918806 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0910 17:30:09.057061   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0910 17:30:09.057220   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.057241   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.060561   16364 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:30:09.060592   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0910 17:30:09.060712   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2220982631 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0910 17:30:09.061052   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0910 17:30:09.062508   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0910 17:30:09.063868   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0910 17:30:09.065071   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0910 17:30:09.066368   16364 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:30:09.066391   16364 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0910 17:30:09.066509   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2134427263 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0910 17:30:09.071503   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:09.071525   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 17:30:09.071554   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.071557   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.071588   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.071591   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.071656   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3475140245 /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.073367   16364 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0910 17:30:09.073501   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:09.073625   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.074738   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:09.074879   16364 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0910 17:30:09.074911   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0910 17:30:09.075055   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1330522 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0910 17:30:09.076245   16364 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:30:09.076271   16364 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0910 17:30:09.076389   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube551670870 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0910 17:30:09.077814   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0910 17:30:09.087077   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.087105   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.087113   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.088883   16364 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0910 17:30:09.092690   16364 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:09.092725   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0910 17:30:09.092883   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1641021759 /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:09.095287   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.097157   16364 out.go:177]   - Using image docker.io/registry:2.8.3
	I0910 17:30:09.100612   16364 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0910 17:30:09.102471   16364 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:30:09.102528   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0910 17:30:09.103303   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3417615690 /etc/kubernetes/addons/registry-rc.yaml
	I0910 17:30:09.116316   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0910 17:30:09.120137   16364 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
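The bash pipeline above rewrites the CoreDNS Corefile in place: the first sed expression splices a hosts block in front of the forward plugin so that host.minikube.internal resolves to the host (127.0.0.1 here, since the none driver runs directly on it), and the second enables query logging by inserting log ahead of errors. An abbreviated sketch of the result, derived from the sed expressions rather than the live ConfigMap (untouched plugins elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       127.0.0.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}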
	I0910 17:30:09.122100   16364 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:30:09.122135   16364 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0910 17:30:09.122265   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1443422082 /etc/kubernetes/addons/registry-svc.yaml
	I0910 17:30:09.123594   16364 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:30:09.123624   16364 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0910 17:30:09.123742   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2996210452 /etc/kubernetes/addons/yakd-sa.yaml
	I0910 17:30:09.128179   16364 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0910 17:30:09.128203   16364 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0910 17:30:09.128303   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3091310554 /etc/kubernetes/addons/ig-role.yaml
	I0910 17:30:09.130428   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.130454   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.133440   16364 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:30:09.133466   16364 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0910 17:30:09.133584   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3424000724 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0910 17:30:09.135174   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.136267   16364 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0910 17:30:09.136308   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:09.136924   16364 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:09.136943   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0910 17:30:09.136944   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:09.136959   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:09.136990   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:09.137051   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube246309335 /etc/kubernetes/addons/registry-proxy.yaml
	I0910 17:30:09.138182   16364 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:30:09.138210   16364 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0910 17:30:09.138330   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2760254755 /etc/kubernetes/addons/yakd-crb.yaml
	I0910 17:30:09.146414   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0910 17:30:09.146462   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.146481   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.146528   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 17:30:09.146620   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:09.146668   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.146704   16364 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0910 17:30:09.146723   16364 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0910 17:30:09.146826   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1684922084 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0910 17:30:09.146894   16364 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:09.146926   16364 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0910 17:30:09.147035   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1989500831 /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:09.149487   16364 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:30:09.149512   16364 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0910 17:30:09.149608   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1816850762 /etc/kubernetes/addons/yakd-svc.yaml
	I0910 17:30:09.151161   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.153570   16364 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0910 17:30:09.154898   16364 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:30:09.154926   16364 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0910 17:30:09.155029   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube943074255 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0910 17:30:09.155694   16364 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:30:09.155720   16364 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0910 17:30:09.155823   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3837847747 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0910 17:30:09.157385   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
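These three manifests create the registry ReplicationController, its ClusterIP Service, and the node-local registry proxy. In-cluster clients reach the registry through the standard Kubernetes service DNS name <service>.<namespace>.svc.cluster.local; a hypothetical standalone probe of that endpoint (not part of the test harness, and only resolvable from inside the cluster or a host wired into the cluster DNS):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// In-cluster DNS name for the Service created by registry-svc.yaml above.
		const url = "http://registry.kube-system.svc.cluster.local/"
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // a healthy registry answers "200 OK"
	}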
	I0910 17:30:09.167351   16364 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:09.167405   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0910 17:30:09.167524   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3715918310 /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:09.170582   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:09.174550   16364 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:30:09.174584   16364 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0910 17:30:09.174720   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube841793964 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0910 17:30:09.182124   16364 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:30:09.182155   16364 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0910 17:30:09.182279   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2542701545 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0910 17:30:09.182582   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.182644   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.185112   16364 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:09.185140   16364 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0910 17:30:09.185259   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4115677070 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:09.187872   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:09.190668   16364 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0910 17:30:09.190908   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0910 17:30:09.193443   16364 out.go:177]   - Using image docker.io/busybox:stable
	I0910 17:30:09.194845   16364 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:09.194877   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0910 17:30:09.194997   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube376046235 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:09.197684   16364 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:30:09.197710   16364 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0910 17:30:09.197832   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1860107440 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0910 17:30:09.200769   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:09.200829   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:09.207776   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0910 17:30:09.219436   16364 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:30:09.219475   16364 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0910 17:30:09.219622   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2542838306 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0910 17:30:09.222841   16364 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:30:09.222902   16364 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0910 17:30:09.223275   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube203256630 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0910 17:30:09.227589   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:09.227619   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:09.229773   16364 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:30:09.229806   16364 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0910 17:30:09.229935   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2678765224 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0910 17:30:09.232500   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
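
For context, the healthz probe in the lines above is just an HTTPS GET against the apiserver; a healthy endpoint answers 200 with the body "ok". A minimal Go sketch of the same check (address and port taken from the log; TLS verification is skipped here only to keep the sketch self-contained, whereas minikube itself trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// The apiserver serves its own certificate; a real client verifies it
		// against the cluster CA. InsecureSkipVerify is only for this sketch.
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.138.0.48:8443/healthz") // address from the log
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok".
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}
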
	I0910 17:30:09.232545   16364 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.232561   16364 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0910 17:30:09.232569   16364 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.232611   16364 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.234567   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0910 17:30:09.235831   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0910 17:30:09.243038   16364 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:30:09.243172   16364 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0910 17:30:09.243466   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4130704951 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0910 17:30:09.249724   16364 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:30:09.249755   16364 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0910 17:30:09.249903   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1606891582 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0910 17:30:09.260693   16364 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 17:30:09.260852   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1924929374 /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.274258   16364 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:30:09.274289   16364 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0910 17:30:09.274436   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube556082076 /etc/kubernetes/addons/ig-crd.yaml
	I0910 17:30:09.315398   16364 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:30:09.315433   16364 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0910 17:30:09.315591   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2569062290 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0910 17:30:09.318783   16364 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:30:09.318819   16364 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0910 17:30:09.318968   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2543839615 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0910 17:30:09.322310   16364 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:09.322343   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0910 17:30:09.322647   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1505890648 /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:09.333345   16364 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0910 17:30:09.350554   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 17:30:09.364054   16364 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:09.364091   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0910 17:30:09.364258   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube722952371 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:09.385780   16364 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:30:09.385815   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0910 17:30:09.385981   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2904041187 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0910 17:30:09.404581   16364 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0910 17:30:09.408468   16364 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0910 17:30:09.408491   16364 node_ready.go:38] duration metric: took 3.878029ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0910 17:30:09.408501   16364 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
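
The node_ready wait above amounts to fetching the node object and inspecting its Ready condition. A minimal client-go sketch of that check (node name from the log; the kubeconfig path here is an assumption for illustration, not the /var/lib/minikube/kubeconfig that minikube uses on the node):

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ubuntu-20-agent-2", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// "True" once the kubelet reports the node healthy.
				fmt.Println(`node "Ready":`, c.Status)
			}
		}
	}
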
	I0910 17:30:09.434694   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:09.435835   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0910 17:30:09.438955   16364 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-p7ksm" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:09.548109   16364 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:30:09.548150   16364 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0910 17:30:09.548314   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube271541409 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0910 17:30:09.603535   16364 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:30:09.603571   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0910 17:30:09.603709   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4014726321 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0910 17:30:09.641698   16364 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:30:09.641732   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0910 17:30:09.641863   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4241772234 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0910 17:30:09.695440   16364 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0910 17:30:09.706407   16364 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:09.706445   16364 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0910 17:30:09.706589   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3564520501 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:09.835427   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0910 17:30:10.167170   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.009737302s)
	I0910 17:30:10.167205   16364 addons.go:475] Verifying addon registry=true in "minikube"
	I0910 17:30:10.172175   16364 out.go:177] * Verifying registry addon...
	I0910 17:30:10.180930   16364 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0910 17:30:10.203600   16364 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0910 17:30:10.223500   16364 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0910 17:30:10.223525   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
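
The kapi.go lines around this point re-list the pods behind a label selector until every pod reports Running, which is why the log keeps printing "current state: Pending" until the registry images are pulled. A stripped-down sketch of such a poll loop under the same 6-minute budget (selector and namespace from the log; the 500ms interval is illustrative):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allRunning reports whether every pod matching the selector is Running.
	func allRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // still Pending; keep waiting
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the budget in the log
		for time.Now().Before(deadline) {
			ok, err := allRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry")
			if err != nil {
				log.Fatal(err)
			}
			if ok {
				fmt.Println("registry pods Running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for registry pods")
	}
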
	I0910 17:30:10.308840   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.117833766s)
	I0910 17:30:10.308879   16364 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0910 17:30:10.429278   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.194670238s)
	I0910 17:30:10.510533   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.274666256s)
	I0910 17:30:10.513188   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.305371248s)
	I0910 17:30:10.516027   16364 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0910 17:30:10.624983   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.18908492s)
	I0910 17:30:10.694790   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:11.154836   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.720092594s)
	W0910 17:30:11.154884   16364 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0910 17:30:11.154918   16364 retry.go:31] will retry after 330.311808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
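
The failure above is a classic CRD race: the single apply both creates the VolumeSnapshot CRDs and submits a VolumeSnapshotClass object, but the new CRDs are not yet established when the custom resource arrives, so the REST mapping lookup fails. One way to sidestep the race, sketched here with kubectl invoked from Go the way exec_runner does (file paths taken from the log):

	package main

	import (
		"log"
		"os/exec"
	)

	// run shells out, failing loudly on a non-zero exit.
	func run(args ...string) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v: %v\n%s", args, err, out)
		}
	}

	func main() {
		// First create just the CRD...
		run("kubectl", "apply",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		// ...then block until the apiserver has established it...
		run("kubectl", "wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
		// ...and only then apply resources that depend on the CRD.
		run("kubectl", "apply",
			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	}

minikube's own remedy, visible a few lines below, is simply to retry after 330ms and eventually re-apply with --force once the CRDs have settled.
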
	I0910 17:30:11.186947   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:11.450679   16364 pod_ready.go:103] pod "coredns-6f6b679f8f-p7ksm" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:11.485666   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0910 17:30:11.713112   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:12.032741   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.916391807s)
	I0910 17:30:12.185495   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:12.327761   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.492269915s)
	I0910 17:30:12.327793   16364 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0910 17:30:12.329600   16364 out.go:177] * Verifying csi-hostpath-driver addon...
	I0910 17:30:12.332129   16364 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 17:30:12.342915   16364 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 17:30:12.342940   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:12.685592   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:12.837606   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:13.185599   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:13.336681   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:13.684257   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:13.837450   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:13.944852   16364 pod_ready.go:93] pod "coredns-6f6b679f8f-p7ksm" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:13.944879   16364 pod_ready.go:82] duration metric: took 4.505819671s for pod "coredns-6f6b679f8f-p7ksm" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:13.944891   16364 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tf45g" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:14.185261   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:14.336283   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:14.532311   16364 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.046584782s)
	I0910 17:30:14.685169   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:14.836936   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:15.185219   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:15.337814   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:15.685804   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:15.836229   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:15.949966   16364 pod_ready.go:103] pod "coredns-6f6b679f8f-tf45g" in "kube-system" namespace has status "Ready":"False"
	I0910 17:30:16.112036   16364 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0910 17:30:16.112188   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3732746271 /var/lib/minikube/google_application_credentials.json
	I0910 17:30:16.122625   16364 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0910 17:30:16.122734   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube942322606 /var/lib/minikube/google_cloud_project
	I0910 17:30:16.131523   16364 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0910 17:30:16.131568   16364 host.go:66] Checking if "minikube" exists ...
	I0910 17:30:16.132024   16364 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0910 17:30:16.132050   16364 api_server.go:166] Checking apiserver status ...
	I0910 17:30:16.132077   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:16.148180   16364 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17651/cgroup
	I0910 17:30:16.158342   16364 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547"
	I0910 17:30:16.158391   16364 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/e3aae09acb674c5f1e4997b577ff61022f799a5511038e2a96915a2170249547/freezer.state
	I0910 17:30:16.167094   16364 api_server.go:204] freezer state: "THAWED"
	I0910 17:30:16.167116   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:16.175106   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
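
The api_server status check above locates the kube-apiserver process, reads its freezer cgroup out of /proc, and confirms the cgroup is THAWED (not frozen) before trusting healthz. A compact sketch of those three steps, assuming the cgroup v1 freezer layout this Ubuntu 20.04 host shows in the log:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// 1. Find the newest kube-apiserver pid, as `pgrep -xnf` does in the log.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			log.Fatal("no apiserver process: ", err)
		}
		pid := strings.TrimSpace(string(out))

		// 2. Pull the freezer entry out of /proc/<pid>/cgroup ("N:freezer:/path").
		data, err := os.ReadFile("/proc/" + pid + "/cgroup")
		if err != nil {
			log.Fatal(err)
		}
		var freezerPath string
		for _, line := range strings.Split(string(data), "\n") {
			parts := strings.SplitN(line, ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				freezerPath = parts[2]
			}
		}

		// 3. The container is safe to probe only if the cgroup is THAWED.
		state, err := os.ReadFile("/sys/fs/cgroup/freezer" + freezerPath + "/freezer.state")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("freezer state:", strings.TrimSpace(string(state)))
	}
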
	I0910 17:30:16.175166   16364 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0910 17:30:16.184258   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:16.239920   16364 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0910 17:30:16.336466   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:16.439154   16364 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0910 17:30:16.460226   16364 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:30:16.460276   16364 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0910 17:30:16.461092   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1739960247 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0910 17:30:16.477482   16364 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:30:16.477509   16364 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0910 17:30:16.477609   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3606157941 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0910 17:30:16.487818   16364 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:16.487842   16364 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0910 17:30:16.487936   16364 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4256019727 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:16.498516   16364 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0910 17:30:16.685141   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:16.836835   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:16.915716   16364 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0910 17:30:16.917333   16364 out.go:177] * Verifying gcp-auth addon...
	I0910 17:30:16.919237   16364 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0910 17:30:16.936539   16364 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:30:16.950266   16364 pod_ready.go:98] pod "coredns-6f6b679f8f-tf45g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:16 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-10 17:30:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:09 +0000 UTC,FinishedAt:2024-09-10 17:30:15 +0000 UTC,ContainerID:docker://8578a25b71ef3e3b046c667bf9a6114090b7848cc3d16e8e39010f8328b65c81,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://8578a25b71ef3e3b046c667bf9a6114090b7848cc3d16e8e39010f8328b65c81 Started:0xc001ffc540 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e10da0} {Name:kube-api-access-9ssgj MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e10db0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:16.950298   16364 pod_ready.go:82] duration metric: took 3.005393979s for pod "coredns-6f6b679f8f-tf45g" in "kube-system" namespace to be "Ready" ...
	E0910 17:30:16.950312   16364 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-tf45g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:16 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-10 17:30:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-10 17:30:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-10 17:30:09 +0000 UTC,FinishedAt:2024-09-10 17:30:15 +0000 UTC,ContainerID:docker://8578a25b71ef3e3b046c667bf9a6114090b7848cc3d16e8e39010f8328b65c81,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://8578a25b71ef3e3b046c667bf9a6114090b7848cc3d16e8e39010f8328b65c81 Started:0xc001ffc540 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e10da0} {Name:kube-api-access-9ssgj MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e10db0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 17:30:16.950324   16364 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.953541   16364 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:16.953559   16364 pod_ready.go:82] duration metric: took 3.227588ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.953570   16364 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.956681   16364 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:16.956695   16364 pod_ready.go:82] duration metric: took 3.117783ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.956704   16364 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.959966   16364 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:16.959981   16364 pod_ready.go:82] duration metric: took 3.271988ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.959989   16364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mv8w8" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.963095   16364 pod_ready.go:93] pod "kube-proxy-mv8w8" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:16.963111   16364 pod_ready.go:82] duration metric: took 3.115399ms for pod "kube-proxy-mv8w8" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:16.963121   16364 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:17.185546   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:17.337940   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:17.349942   16364 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:17.349970   16364 pod_ready.go:82] duration metric: took 386.840303ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:17.349984   16364 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-772g6" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:17.684216   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:17.836123   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:18.148663   16364 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-772g6" in "kube-system" namespace has status "Ready":"True"
	I0910 17:30:18.148685   16364 pod_ready.go:82] duration metric: took 798.692686ms for pod "nvidia-device-plugin-daemonset-772g6" in "kube-system" namespace to be "Ready" ...
	I0910 17:30:18.148692   16364 pod_ready.go:39] duration metric: took 8.740179275s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 17:30:18.148707   16364 api_server.go:52] waiting for apiserver process to appear ...
	I0910 17:30:18.148758   16364 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 17:30:18.165137   16364 api_server.go:72] duration metric: took 9.285267708s to wait for apiserver process to appear ...
	I0910 17:30:18.165157   16364 api_server.go:88] waiting for apiserver healthz status ...
	I0910 17:30:18.165176   16364 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0910 17:30:18.168441   16364 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0910 17:30:18.169215   16364 api_server.go:141] control plane version: v1.31.0
	I0910 17:30:18.169234   16364 api_server.go:131] duration metric: took 4.072287ms to wait for apiserver health ...
	I0910 17:30:18.169242   16364 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 17:30:18.184455   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:18.337516   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:18.354223   16364 system_pods.go:59] 17 kube-system pods found
	I0910 17:30:18.354257   16364 system_pods.go:61] "coredns-6f6b679f8f-p7ksm" [97182a01-b332-4617-adf0-8281ae643671] Running
	I0910 17:30:18.354268   16364 system_pods.go:61] "csi-hostpath-attacher-0" [0c661046-0e6d-417d-86e1-dc9c77058253] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:18.354284   16364 system_pods.go:61] "csi-hostpath-resizer-0" [69619b8b-1748-468d-a711-48c1c73c045e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:18.354298   16364 system_pods.go:61] "csi-hostpathplugin-8pxq9" [6058b9c3-d42b-4c73-aa08-82ff8bc6cb41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:18.354306   16364 system_pods.go:61] "etcd-ubuntu-20-agent-2" [296f0faf-c107-42e8-bbb9-4a5a56c898fe] Running
	I0910 17:30:18.354310   16364 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [3ca492ab-2446-4197-91fd-b1514fe854a1] Running
	I0910 17:30:18.354314   16364 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [284f4ff3-9299-423c-b22e-6ddc07db36a3] Running
	I0910 17:30:18.354316   16364 system_pods.go:61] "kube-proxy-mv8w8" [45a31af6-b5d9-4e81-b282-d753e0bf0952] Running
	I0910 17:30:18.354320   16364 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [716fcb57-ffef-438f-846b-f6352f587ecd] Running
	I0910 17:30:18.354328   16364 system_pods.go:61] "metrics-server-84c5f94fbc-fqjvs" [b1eca05a-bc88-4568-a1d8-00b1e625553b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:18.354334   16364 system_pods.go:61] "nvidia-device-plugin-daemonset-772g6" [d5a88152-1da7-4b07-97d5-cc2a32452852] Running
	I0910 17:30:18.354339   16364 system_pods.go:61] "registry-66c9cd494c-g4sqx" [41df7a66-1627-4588-93cb-12aa6056b911] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:18.354348   16364 system_pods.go:61] "registry-proxy-r8m4h" [9943c8e8-b9ee-43fa-a4eb-dd49156651ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:18.354367   16364 system_pods.go:61] "snapshot-controller-56fcc65765-cb7p6" [629d82ac-356c-4bec-b57a-dad1f23be058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:18.354377   16364 system_pods.go:61] "snapshot-controller-56fcc65765-l6jld" [f5cecffa-a468-45c0-a445-967cce8df134] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:18.354381   16364 system_pods.go:61] "storage-provisioner" [335e9e73-e708-40b9-baa8-0f5480398ff1] Running
	I0910 17:30:18.354395   16364 system_pods.go:61] "tiller-deploy-b48cc5f79-kxglv" [428b4e57-5f02-4f6f-a024-e31d48a3c3dc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:18.354404   16364 system_pods.go:74] duration metric: took 185.156996ms to wait for pod list to return data ...
	I0910 17:30:18.354411   16364 default_sa.go:34] waiting for default service account to be created ...
	I0910 17:30:18.548361   16364 default_sa.go:45] found service account: "default"
	I0910 17:30:18.548388   16364 default_sa.go:55] duration metric: took 193.968932ms for default service account to be created ...
	I0910 17:30:18.548399   16364 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 17:30:18.684872   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:18.755396   16364 system_pods.go:86] 17 kube-system pods found
	I0910 17:30:18.755425   16364 system_pods.go:89] "coredns-6f6b679f8f-p7ksm" [97182a01-b332-4617-adf0-8281ae643671] Running
	I0910 17:30:18.755438   16364 system_pods.go:89] "csi-hostpath-attacher-0" [0c661046-0e6d-417d-86e1-dc9c77058253] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0910 17:30:18.755447   16364 system_pods.go:89] "csi-hostpath-resizer-0" [69619b8b-1748-468d-a711-48c1c73c045e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0910 17:30:18.755460   16364 system_pods.go:89] "csi-hostpathplugin-8pxq9" [6058b9c3-d42b-4c73-aa08-82ff8bc6cb41] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0910 17:30:18.755469   16364 system_pods.go:89] "etcd-ubuntu-20-agent-2" [296f0faf-c107-42e8-bbb9-4a5a56c898fe] Running
	I0910 17:30:18.755483   16364 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [3ca492ab-2446-4197-91fd-b1514fe854a1] Running
	I0910 17:30:18.755492   16364 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [284f4ff3-9299-423c-b22e-6ddc07db36a3] Running
	I0910 17:30:18.755497   16364 system_pods.go:89] "kube-proxy-mv8w8" [45a31af6-b5d9-4e81-b282-d753e0bf0952] Running
	I0910 17:30:18.755506   16364 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [716fcb57-ffef-438f-846b-f6352f587ecd] Running
	I0910 17:30:18.755514   16364 system_pods.go:89] "metrics-server-84c5f94fbc-fqjvs" [b1eca05a-bc88-4568-a1d8-00b1e625553b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0910 17:30:18.755523   16364 system_pods.go:89] "nvidia-device-plugin-daemonset-772g6" [d5a88152-1da7-4b07-97d5-cc2a32452852] Running
	I0910 17:30:18.755532   16364 system_pods.go:89] "registry-66c9cd494c-g4sqx" [41df7a66-1627-4588-93cb-12aa6056b911] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0910 17:30:18.755541   16364 system_pods.go:89] "registry-proxy-r8m4h" [9943c8e8-b9ee-43fa-a4eb-dd49156651ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0910 17:30:18.755550   16364 system_pods.go:89] "snapshot-controller-56fcc65765-cb7p6" [629d82ac-356c-4bec-b57a-dad1f23be058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:18.755565   16364 system_pods.go:89] "snapshot-controller-56fcc65765-l6jld" [f5cecffa-a468-45c0-a445-967cce8df134] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0910 17:30:18.755571   16364 system_pods.go:89] "storage-provisioner" [335e9e73-e708-40b9-baa8-0f5480398ff1] Running
	I0910 17:30:18.755578   16364 system_pods.go:89] "tiller-deploy-b48cc5f79-kxglv" [428b4e57-5f02-4f6f-a024-e31d48a3c3dc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0910 17:30:18.755589   16364 system_pods.go:126] duration metric: took 207.183849ms to wait for k8s-apps to be running ...
	I0910 17:30:18.755604   16364 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 17:30:18.755656   16364 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0910 17:30:18.769118   16364 system_svc.go:56] duration metric: took 13.506401ms WaitForService to wait for kubelet
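
The WaitForService step above simply asks systemd whether the kubelet unit is active; `systemctl is-active --quiet` encodes the answer in its exit status. A one-function sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means "active"; anything else means the unit is
		// inactive, failed, or unknown.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet running:", err == nil)
	}
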
	I0910 17:30:18.769148   16364 kubeadm.go:582] duration metric: took 9.889280857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 17:30:18.769172   16364 node_conditions.go:102] verifying NodePressure condition ...
	I0910 17:30:18.837762   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:18.949221   16364 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0910 17:30:18.949252   16364 node_conditions.go:123] node cpu capacity is 8
	I0910 17:30:18.949264   16364 node_conditions.go:105] duration metric: took 180.0859ms to run NodePressure ...
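
The NodePressure check reads these capacity figures straight off the node object; the same numbers are visible with a jsonpath query (node name from the log; output shape varies slightly by kubectl version):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Prints the capacity map, e.g. cpu:8 and ephemeral-storage:304681132Ki
		// as reported in the log above.
		out, err := exec.Command("kubectl", "get", "node", "ubuntu-20-agent-2",
			"-o", "jsonpath={.status.capacity}").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(out))
	}
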
	I0910 17:30:18.949279   16364 start.go:241] waiting for startup goroutines ...
	I0910 17:30:19.184352   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.336912   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:19.685452   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:19.837185   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.184497   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.337332   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:20.684414   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:20.838363   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.184848   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:21.336670   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:21.685472   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:21.836704   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.184593   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.340204   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:22.685025   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:22.836901   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.185113   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.336069   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:23.684409   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:23.837452   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.184412   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.336803   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:24.684827   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:24.836199   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.185031   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.336498   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:25.704593   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:25.836116   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.184507   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:26.336633   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:26.684393   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:26.852181   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.184143   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.337319   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:27.684834   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:27.836360   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.184759   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.335899   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:28.684410   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:28.840356   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.184515   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.336528   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:29.684124   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:29.837767   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.184060   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.336782   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:30.684433   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:30.837065   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.184852   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:31.336746   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:31.700421   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:31.836525   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.184326   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.336389   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:32.685201   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:32.837611   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.184669   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.337705   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:33.684894   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:33.836443   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.184907   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.336833   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:34.683926   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:34.837075   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.184866   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.337709   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:35.684899   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0910 17:30:35.837050   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.184345   16364 kapi.go:107] duration metric: took 26.003415409s to wait for kubernetes.io/minikube-addons=registry ...
	I0910 17:30:36.336603   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:36.835847   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.336643   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:37.837088   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.336445   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:38.837140   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.337026   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:39.836134   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.337381   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:40.836476   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.335866   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:41.837025   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.337273   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:42.836766   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.336214   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:43.837100   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.337309   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:44.836934   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.336561   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:45.836123   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.339280   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:46.835869   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.337924   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:47.836137   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.337049   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:48.835942   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.336983   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:49.836775   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.336533   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:50.836298   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.336855   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0910 17:30:51.836725   16364 kapi.go:107] duration metric: took 39.504597562s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0910 17:30:58.422232   16364 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0910 17:30:58.422259   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:58.923101   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.421819   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:30:59.923259   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.421808   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:00.922795   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.423117   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:01.923601   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.422340   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:02.922286   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.421913   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:03.923515   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.422421   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:04.922237   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.421471   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:05.923006   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.423000   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:06.921877   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.422584   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:07.922143   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.421830   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:08.922973   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.422886   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:09.922841   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.422580   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:10.922079   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.422829   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:11.923234   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.422303   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:12.921875   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.423587   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:13.923133   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.421869   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:14.923171   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.423058   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:15.922556   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.422264   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:16.921940   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.422889   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:17.923026   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.422623   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:18.922456   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.422439   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:19.922324   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.422625   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:20.923246   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.422408   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:21.922684   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.423695   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:22.922802   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.422356   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:23.922306   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.422252   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:24.922373   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.422231   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:25.922172   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.422203   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:26.922236   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.421937   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:27.923142   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.421872   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:28.923026   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.421761   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:29.923054   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.423172   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:30.922045   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:31.423031   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:31.921878   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.423329   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:32.922874   16364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0910 17:31:33.422902   16364 kapi.go:107] duration metric: took 1m16.503672781s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0910 17:31:33.424490   16364 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0910 17:31:33.425752   16364 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0910 17:31:33.426755   16364 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0910 17:31:33.427917   16364 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, storage-provisioner, metrics-server, helm-tiller, yakd, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0910 17:31:33.429396   16364 addons.go:510] duration metric: took 1m24.554937205s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner storage-provisioner metrics-server helm-tiller yakd storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0910 17:31:33.429435   16364 start.go:246] waiting for cluster config update ...
	I0910 17:31:33.429452   16364 start.go:255] writing updated cluster config ...
	I0910 17:31:33.429667   16364 exec_runner.go:51] Run: rm -f paused
	I0910 17:31:33.474160   16364 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0910 17:31:33.475625   16364 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
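	The three gcp-auth messages above double as usage notes. A minimal sketch of the opt-out they describe (pod and container names are hypothetical, not taken from this run):
	
	  # Hypothetical pod spec; the gcp-auth-skip-secret label keeps the webhook
	  # from mounting GCP credentials into this pod, per the message above.
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: skip-gcp-auth-demo        # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"
	  spec:
	    containers:
	    - name: app
	      image: gcr.io/k8s-minikube/busybox
	      command: ["sleep", "3600"]
	
	To mount credentials into pods created before the addon was enabled, rerun the enable with the flag the message names:
	
	  minikube addons enable gcp-auth --refresh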
	
	
	==> Docker <==
	-- Logs begin at Mon 2024-07-29 23:03:03 UTC, end at Tue 2024-09-10 17:41:26 UTC. --
	Sep 10 17:32:57 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:32:57.591196461Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 10 17:33:51 ubuntu-20-agent-2 cri-dockerd[16927]: time="2024-09-10T17:33:51Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 10 17:33:52 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:33:52.384979759Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 10 17:33:52 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:33:52.387051634Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 10 17:33:52 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:33:52.834614537Z" level=info msg="ignoring event" container=9549554aacdebd12aca3b1268e60a41d0a15133e5cdb34f083fb4b1176567313 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:35:21 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:35:21.406164928Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 10 17:35:21 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:35:21.408278136Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 10 17:36:44 ubuntu-20-agent-2 cri-dockerd[16927]: time="2024-09-10T17:36:44Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 10 17:36:45 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:36:45.847077787Z" level=info msg="ignoring event" container=6b36ce720ebf90dfb20b883d70d4d227b52c7bce4abfa909cc3e9bae458b260d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:36:54 ubuntu-20-agent-2 cri-dockerd[16927]: time="2024-09-10T17:36:54Z" level=error msg="error getting RW layer size for container ID '9549554aacdebd12aca3b1268e60a41d0a15133e5cdb34f083fb4b1176567313': Error response from daemon: No such container: 9549554aacdebd12aca3b1268e60a41d0a15133e5cdb34f083fb4b1176567313"
	Sep 10 17:36:54 ubuntu-20-agent-2 cri-dockerd[16927]: time="2024-09-10T17:36:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9549554aacdebd12aca3b1268e60a41d0a15133e5cdb34f083fb4b1176567313'"
	Sep 10 17:38:08 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:38:08.392771801Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 10 17:38:08 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:38:08.394985112Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 10 17:40:25 ubuntu-20-agent-2 cri-dockerd[16927]: time="2024-09-10T17:40:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f76d65c416ebf9d77cb8a754e1c7d26fa2a03ba11d9acb6848fc4d9a52fcdd34/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 10 17:40:26 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:40:26.180589783Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:40:26 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:40:26.182710734Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:40:40 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:40:40.384838868Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:40:40 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:40:40.387147920Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:41:05 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:41:05.403630906Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:41:05 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:41:05.405780488Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 10 17:41:25 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:41:25.648589612Z" level=info msg="ignoring event" container=f76d65c416ebf9d77cb8a754e1c7d26fa2a03ba11d9acb6848fc4d9a52fcdd34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:25 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:41:25.884532159Z" level=info msg="ignoring event" container=8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:25 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:41:25.944505715Z" level=info msg="ignoring event" container=c55518b49a3a6a3a6fc56a4e6a8335a1f6126e53d1b0d12d5713cedaea8abee7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:26 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:41:26.017432077Z" level=info msg="ignoring event" container=5e01602f1fdbd89f5fb16be159115d4d93b78aad33145437a7193d0b6db02184 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 17:41:26 ubuntu-20-agent-2 dockerd[16581]: time="2024-09-10T17:41:26.100991187Z" level=info msg="ignoring event" container=5189fe5cc86ce322a3e04372303561abd56ebe1dc4190277b2848c325f26c5f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	6b36ce720ebf9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   a3f1d4763af51       gadget-mfgf7
	06e0dc7291e23       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   793dca004ebb1       gcp-auth-89d5ffd79-d7nlc
	a7f7c3619e9fa       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   b42ef9344e803       csi-hostpathplugin-8pxq9
	6a7711696aaf4       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   b42ef9344e803       csi-hostpathplugin-8pxq9
	cc74cd706be49       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   b42ef9344e803       csi-hostpathplugin-8pxq9
	7a71bcb243682       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   b42ef9344e803       csi-hostpathplugin-8pxq9
	434a6ca91cb64       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   b42ef9344e803       csi-hostpathplugin-8pxq9
	774e8b2136a92       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   298ff13412b29       csi-hostpath-resizer-0
	70c88334fd04f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   b42ef9344e803       csi-hostpathplugin-8pxq9
	c27d97802aebe       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   f012d9bd9130b       csi-hostpath-attacher-0
	19692dbeb97d6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   bd45a8ad82610       snapshot-controller-56fcc65765-cb7p6
	c8041b4d4e390       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   edca123cecba7       snapshot-controller-56fcc65765-l6jld
	2ee26ee65d90f       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   a8c77d0a8e93e       yakd-dashboard-67d98fc6b-m64k5
	fe463c3914206       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   79bd23e4dfb65       local-path-provisioner-86d989889c-p4pb9
	9ac93ae751c6f       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  11 minutes ago      Running             tiller                                   0                   0b87ef02a69bc       tiller-deploy-b48cc5f79-kxglv
	76922941791c4       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   fc99ccf9c01cc       metrics-server-84c5f94fbc-fqjvs
	c1ec04a471df3       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   683c287bdfdc6       cloud-spanner-emulator-769b77f747-xdj75
	f36d14dd6ccf9       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   6fc2c5153103a       nvidia-device-plugin-daemonset-772g6
	3b13242e9e472       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   8783f8b6267b0       storage-provisioner
	8ae250146484e       cbb01a7bd410d                                                                                                                                11 minutes ago      Running             coredns                                  0                   5f32e458471b0       coredns-6f6b679f8f-p7ksm
	4cb4772fdc46e       ad83b2ca7b09e                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   8a3ba72d961e4       kube-proxy-mv8w8
	fbc93f59c9c1a       045733566833c                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   4d0a8dca63531       kube-controller-manager-ubuntu-20-agent-2
	6882a413f4a2d       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   1de700fc56489       etcd-ubuntu-20-agent-2
	fc1421e541447       1766f54c897f0                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   cac81d34a77f0       kube-scheduler-ubuntu-20-agent-2
	e3aae09acb674       604f5db92eaa8                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   af7caf614a27e       kube-apiserver-ubuntu-20-agent-2
	
	
	==> coredns [8ae250146484] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:40047 - 23769 "HINFO IN 7178463509593840035.3055487949674451791. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036347615s
	[INFO] 10.244.0.24:60082 - 48054 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000289442s
	[INFO] 10.244.0.24:47685 - 35780 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000226478s
	[INFO] 10.244.0.24:36680 - 8168 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011333s
	[INFO] 10.244.0.24:45862 - 63828 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120011s
	[INFO] 10.244.0.24:45668 - 25685 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116568s
	[INFO] 10.244.0.24:52702 - 59974 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009885s
	[INFO] 10.244.0.24:42250 - 53942 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.002877424s
	[INFO] 10.244.0.24:40358 - 10080 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003024911s
	[INFO] 10.244.0.24:54697 - 42130 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00312878s
	[INFO] 10.244.0.24:58709 - 53092 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003192562s
	[INFO] 10.244.0.24:54967 - 10616 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002197128s
	[INFO] 10.244.0.24:46147 - 13715 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00359813s
	[INFO] 10.244.0.24:51325 - 54455 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00132778s
	[INFO] 10.244.0.24:40823 - 64315 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002313852s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37b4bace07cd53444288cad630e4db4b688b8c18
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_10T17_30_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Sep 2024 17:30:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Sep 2024 17:41:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Sep 2024 17:37:10 +0000   Tue, 10 Sep 2024 17:29:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Sep 2024 17:37:10 +0000   Tue, 10 Sep 2024 17:29:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Sep 2024 17:37:10 +0000   Tue, 10 Sep 2024 17:29:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Sep 2024 17:37:10 +0000   Tue, 10 Sep 2024 17:30:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    fd5c8b9a-1c5d-42ce-95a5-c4b9c271309e
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-xdj75      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-mfgf7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-d7nlc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-p7ksm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-8pxq9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mv8w8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-fqjvs              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-772g6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-cb7p6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-l6jld         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-kxglv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-p4pb9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-m64k5               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
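	The CgroupV1 warning in the events above can be verified from the host. A hedged one-liner (plain Linux, not specific to this machine): the filesystem type mounted at /sys/fs/cgroup distinguishes the two modes.
	
	  # Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host.
	  stat -fc %T /sys/fs/cgroup/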
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 2e c8 1f 61 98 08 06
	[  +0.015838] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 13 8f a4 92 b2 08 06
	[  +2.718023] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 a4 14 21 6a 95 08 06
	[  +1.847101] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 40 85 8a cc fe 08 06
	[  +2.257262] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 d0 56 d6 6d 77 08 06
	[  +4.094358] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e d0 a0 69 70 48 08 06
	[  +0.882623] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 a0 2c 08 09 36 08 06
	[  +0.033034] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 d2 ae 4c 16 22 08 06
	[  +0.403362] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 80 86 7f 1c 58 08 06
	[Sep10 17:31] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 74 af aa 4f 0f 08 06
	[  +0.029593] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000012] ll header: 00000000: ff ff ff ff ff ff fa 89 7e dd 03 66 08 06
	[ +11.034700] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e a4 5a 0c 82 b1 08 06
	[  +0.000472] IPv4: martian source 10.244.0.24 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 d7 5b e0 fb 60 08 06
	
	
	==> etcd [6882a413f4a2] <==
	{"level":"info","ts":"2024-09-10T17:29:59.365410Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b435b960bec7c3c","initial-advertise-peer-urls":["https://10.138.0.48:2380"],"listen-peer-urls":["https://10.138.0.48:2380"],"advertise-client-urls":["https://10.138.0.48:2379"],"listen-client-urls":["https://10.138.0.48:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-10T17:29:59.365442Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-10T17:29:59.850374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-10T17:29:59.850436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-10T17:29:59.850465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-10T17:29:59.850481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:59.850492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:59.850507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:59.850520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-10T17:29:59.851388Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:59.851904Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-10T17:29:59.851938Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:29:59.852005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-10T17:29:59.852136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-10T17:29:59.852159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-10T17:29:59.852283Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:59.852378Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:59.852447Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-10T17:29:59.853018Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:29:59.853020Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-10T17:29:59.853876Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-10T17:29:59.853891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-10T17:39:59.870976Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1757}
	{"level":"info","ts":"2024-09-10T17:39:59.893151Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1757,"took":"21.608799ms","hash":913925951,"current-db-size-bytes":8462336,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":4476928,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-09-10T17:39:59.893191Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":913925951,"revision":1757,"compact-revision":-1}
	
	
	==> gcp-auth [06e0dc7291e2] <==
	2024/09/10 17:31:32 GCP Auth Webhook started!
	2024/09/10 17:31:49 Ready to marshal response ...
	2024/09/10 17:31:49 Ready to write response ...
	2024/09/10 17:31:50 Ready to marshal response ...
	2024/09/10 17:31:50 Ready to write response ...
	2024/09/10 17:32:12 Ready to marshal response ...
	2024/09/10 17:32:12 Ready to write response ...
	2024/09/10 17:32:13 Ready to marshal response ...
	2024/09/10 17:32:13 Ready to write response ...
	2024/09/10 17:32:13 Ready to marshal response ...
	2024/09/10 17:32:13 Ready to write response ...
	2024/09/10 17:40:25 Ready to marshal response ...
	2024/09/10 17:40:25 Ready to write response ...
	
	
	==> kernel <==
	 17:41:26 up 23 min,  0 users,  load average: 0.10, 0.20, 0.19
	Linux ubuntu-20-agent-2 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [e3aae09acb67] <==
	W0910 17:30:52.497357       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.35.228:443: connect: connection refused
	W0910 17:30:57.922460       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.250.46:443: connect: connection refused
	E0910 17:30:57.922504       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.250.46:443: connect: connection refused" logger="UnhandledError"
	W0910 17:31:19.941450       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.250.46:443: connect: connection refused
	E0910 17:31:19.941487       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.250.46:443: connect: connection refused" logger="UnhandledError"
	W0910 17:31:19.948359       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.250.46:443: connect: connection refused
	E0910 17:31:19.948397       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.250.46:443: connect: connection refused" logger="UnhandledError"
	I0910 17:31:49.727709       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0910 17:31:49.744319       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0910 17:32:03.113676       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:03.122650       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:03.190976       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0910 17:32:03.201642       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0910 17:32:03.222577       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:03.308585       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0910 17:32:03.385819       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:03.449761       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0910 17:32:03.475604       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0910 17:32:04.138065       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0910 17:32:04.308849       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0910 17:32:04.348865       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0910 17:32:04.355807       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0910 17:32:04.476545       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0910 17:32:04.500636       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0910 17:32:04.656272       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
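	The webhook lines above log "failing closed" for volcano and "failing open" for gcp-auth; which one a webhook gets is set by its failurePolicy. A minimal sketch of that field (all names hypothetical, not from this cluster):
	
	  # failurePolicy decides what the apiserver does when the webhook is unreachable.
	  apiVersion: admissionregistration.k8s.io/v1
	  kind: MutatingWebhookConfiguration
	  metadata:
	    name: example-webhook            # hypothetical
	  webhooks:
	  - name: example.mutate.example.com # hypothetical
	    failurePolicy: Ignore            # fail open: admit the request anyway
	    # failurePolicy: Fail            # fail closed: reject the request
	    clientConfig:
	      service:
	        name: example-svc
	        namespace: default
	        path: /mutate
	    admissionReviewVersions: ["v1"]
	    sideEffects: None
	    rules:
	    - apiGroups: [""]
	      apiVersions: ["v1"]
	      operations: ["CREATE"]
	      resources: ["pods"]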
	
	
	==> kube-controller-manager [fbc93f59c9c1] <==
	W0910 17:40:18.063326       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:18.063393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:30.605867       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:30.605912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:38.007592       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:38.007633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:42.254993       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:42.255046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:51.558403       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:51.558443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:40:58.503686       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:40:58.503728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:07.990845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:07.990883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:10.927067       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:10.927107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:11.699825       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:11.699866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:14.539609       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:14.539650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:22.516539       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:22.516587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0910 17:41:22.524864       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0910 17:41:22.524904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0910 17:41:25.851196       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.248µs"
	
	
	==> kube-proxy [4cb4772fdc46] <==
	I0910 17:30:09.119807       1 server_linux.go:66] "Using iptables proxy"
	I0910 17:30:09.282950       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0910 17:30:09.283282       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0910 17:30:09.349368       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0910 17:30:09.349451       1 server_linux.go:169] "Using iptables Proxier"
	I0910 17:30:09.353003       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0910 17:30:09.353505       1 server.go:483] "Version info" version="v1.31.0"
	I0910 17:30:09.353538       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 17:30:09.363057       1 config.go:197] "Starting service config controller"
	I0910 17:30:09.363098       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0910 17:30:09.363130       1 config.go:104] "Starting endpoint slice config controller"
	I0910 17:30:09.363135       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0910 17:30:09.365901       1 config.go:326] "Starting node config controller"
	I0910 17:30:09.365939       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0910 17:30:09.464116       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0910 17:30:09.464187       1 shared_informer.go:320] Caches are synced for service config
	I0910 17:30:09.466104       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fc1421e54144] <==
	W0910 17:30:00.738081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0910 17:30:00.737992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:30:00.738111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0910 17:30:00.738133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:00.738153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:00.738176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:00.738206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 17:30:00.738230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:00.738264       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0910 17:30:00.738274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0910 17:30:00.738286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 17:30:00.738291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0910 17:30:00.738302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0910 17:30:00.738287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.574338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 17:30:01.574377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.628706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 17:30:01.628740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.781687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 17:30:01.781731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.855297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 17:30:01.855346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0910 17:30:01.868620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 17:30:01.868660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0910 17:30:02.434858       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Mon 2024-07-29 23:03:03 UTC, end at Tue 2024-09-10 17:41:26 UTC. --
	Sep 10 17:41:05 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:05.406265   17819 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Sep 10 17:41:05 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:05.406406   17819 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6kshl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(8af4eb15-8235-4e01-a946-dfa1240e477b): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 10 17:41:05 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:05.407563   17819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="8af4eb15-8235-4e01-a946-dfa1240e477b"
	Sep 10 17:41:10 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:10.245127   17819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="200191ea-66f6-44b0-8f2d-46a310cc491d"
	Sep 10 17:41:18 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:18.243846   17819 scope.go:117] "RemoveContainer" containerID="6b36ce720ebf90dfb20b883d70d4d227b52c7bce4abfa909cc3e9bae458b260d"
	Sep 10 17:41:18 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:18.244177   17819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-mfgf7_gadget(5a9e62c0-7b2d-41cb-b8dc-38cbbaa81561)\"" pod="gadget/gadget-mfgf7" podUID="5a9e62c0-7b2d-41cb-b8dc-38cbbaa81561"
	Sep 10 17:41:18 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:18.248101   17819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="8af4eb15-8235-4e01-a946-dfa1240e477b"
	Sep 10 17:41:22 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:22.245439   17819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="200191ea-66f6-44b0-8f2d-46a310cc491d"
	Sep 10 17:41:25 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:25.766668   17819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kshl\" (UniqueName: \"kubernetes.io/projected/8af4eb15-8235-4e01-a946-dfa1240e477b-kube-api-access-6kshl\") pod \"8af4eb15-8235-4e01-a946-dfa1240e477b\" (UID: \"8af4eb15-8235-4e01-a946-dfa1240e477b\") "
	Sep 10 17:41:25 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:25.766703   17819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8af4eb15-8235-4e01-a946-dfa1240e477b-gcp-creds\") pod \"8af4eb15-8235-4e01-a946-dfa1240e477b\" (UID: \"8af4eb15-8235-4e01-a946-dfa1240e477b\") "
	Sep 10 17:41:25 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:25.766789   17819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8af4eb15-8235-4e01-a946-dfa1240e477b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8af4eb15-8235-4e01-a946-dfa1240e477b" (UID: "8af4eb15-8235-4e01-a946-dfa1240e477b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 10 17:41:25 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:25.768580   17819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af4eb15-8235-4e01-a946-dfa1240e477b-kube-api-access-6kshl" (OuterVolumeSpecName: "kube-api-access-6kshl") pod "8af4eb15-8235-4e01-a946-dfa1240e477b" (UID: "8af4eb15-8235-4e01-a946-dfa1240e477b"). InnerVolumeSpecName "kube-api-access-6kshl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:41:25 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:25.867836   17819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6kshl\" (UniqueName: \"kubernetes.io/projected/8af4eb15-8235-4e01-a946-dfa1240e477b-kube-api-access-6kshl\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 10 17:41:25 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:25.867879   17819 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8af4eb15-8235-4e01-a946-dfa1240e477b-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.142825   17819 scope.go:117] "RemoveContainer" containerID="c55518b49a3a6a3a6fc56a4e6a8335a1f6126e53d1b0d12d5713cedaea8abee7"
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.158456   17819 scope.go:117] "RemoveContainer" containerID="8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721"
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.170998   17819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qllbh\" (UniqueName: \"kubernetes.io/projected/41df7a66-1627-4588-93cb-12aa6056b911-kube-api-access-qllbh\") pod \"41df7a66-1627-4588-93cb-12aa6056b911\" (UID: \"41df7a66-1627-4588-93cb-12aa6056b911\") "
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.173705   17819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41df7a66-1627-4588-93cb-12aa6056b911-kube-api-access-qllbh" (OuterVolumeSpecName: "kube-api-access-qllbh") pod "41df7a66-1627-4588-93cb-12aa6056b911" (UID: "41df7a66-1627-4588-93cb-12aa6056b911"). InnerVolumeSpecName "kube-api-access-qllbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.178992   17819 scope.go:117] "RemoveContainer" containerID="8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721"
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: E0910 17:41:26.180144   17819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721" containerID="8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721"
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.180190   17819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721"} err="failed to get container status \"8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8d3ef1de6a00f96da1b83fcdb91632863d668884871ed0a49882df5bfc2df721"
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.272265   17819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2hlv\" (UniqueName: \"kubernetes.io/projected/9943c8e8-b9ee-43fa-a4eb-dd49156651ce-kube-api-access-h2hlv\") pod \"9943c8e8-b9ee-43fa-a4eb-dd49156651ce\" (UID: \"9943c8e8-b9ee-43fa-a4eb-dd49156651ce\") "
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.272388   17819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qllbh\" (UniqueName: \"kubernetes.io/projected/41df7a66-1627-4588-93cb-12aa6056b911-kube-api-access-qllbh\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.274363   17819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9943c8e8-b9ee-43fa-a4eb-dd49156651ce-kube-api-access-h2hlv" (OuterVolumeSpecName: "kube-api-access-h2hlv") pod "9943c8e8-b9ee-43fa-a4eb-dd49156651ce" (UID: "9943c8e8-b9ee-43fa-a4eb-dd49156651ce"). InnerVolumeSpecName "kube-api-access-h2hlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 17:41:26 ubuntu-20-agent-2 kubelet[17819]: I0910 17:41:26.372952   17819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h2hlv\" (UniqueName: \"kubernetes.io/projected/9943c8e8-b9ee-43fa-a4eb-dd49156651ce-kube-api-access-h2hlv\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	
	
	==> storage-provisioner [3b13242e9e47] <==
	I0910 17:30:11.294858       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 17:30:11.308052       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 17:30:11.308123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 17:30:11.316768       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 17:30:11.316918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e32ffb0a-0e3f-46e7-a49f-ada50a70d4c8!
	I0910 17:30:11.318003       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0cd1843a-6e0f-4cf4-be9b-dec88ea285ca", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e32ffb0a-0e3f-46e7-a49f-ada50a70d4c8 became leader
	I0910 17:30:11.418379       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e32ffb0a-0e3f-46e7-a49f-ada50a70d4c8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-proxy-r8m4h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox registry-proxy-r8m4h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod busybox registry-proxy-r8m4h: exit status 1 (66.553479ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Tue, 10 Sep 2024 17:32:13 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bl7gn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bl7gn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m35s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m35s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m35s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m22s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-proxy-r8m4h" not found

** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod busybox registry-proxy-r8m4h: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.73s)
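The kubelet log and the pod describe output above point at the root cause: the registry-test and busybox pods never started because every pull of gcr.io/k8s-minikube/busybox (both :latest and :1.28.4-glibc) failed with "unauthorized: authentication failed", so the wget probe inside registry-test timed out; nothing in the logs implicates the registry service itself. A minimal sketch of re-running the same in-cluster probe by hand, assuming the registry addon is still enabled — the pod name registry-check and the busybox:1.36 tag are illustrative stand-ins, not taken from this report:

	# Repeat the test's connectivity check, substituting an image the node can pull:
	kubectl --context minikube run --rm registry-check --restart=Never \
	  --image=busybox:1.36 -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

If the image pulls cleanly and the service DNS name resolves, wget --spider -S prints the registry's response headers instead of timing out.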
Test pass (111/168)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 2.47
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 1.03
15 TestDownloadOnly/v1.31.0/binaries 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.05
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 41.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 102.06
29 TestAddons/serial/Volcano 39.41
31 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/parallel/InspektorGadget 10.44
36 TestAddons/parallel/MetricsServer 5.38
37 TestAddons/parallel/HelmTiller 10.03
39 TestAddons/parallel/CSI 48.99
40 TestAddons/parallel/Headlamp 15.85
41 TestAddons/parallel/CloudSpanner 5.24
43 TestAddons/parallel/NvidiaDevicePlugin 5.21
44 TestAddons/parallel/Yakd 10.39
45 TestAddons/StoppedEnableDisable 10.74
47 TestCertExpiration 227.92
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 28.27
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 33.29
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.06
65 TestFunctional/serial/MinikubeKubectlCmd 0.1
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 37.03
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.79
70 TestFunctional/serial/LogsFileCmd 0.82
71 TestFunctional/serial/InvalidService 3.99
73 TestFunctional/parallel/ConfigCmd 0.26
74 TestFunctional/parallel/DashboardCmd 6.63
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.08
77 TestFunctional/parallel/StatusCmd 0.4
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.23
81 TestFunctional/parallel/ProfileCmd/profile_list 0.21
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
84 TestFunctional/parallel/ServiceCmd/DeployApp 8.14
85 TestFunctional/parallel/ServiceCmd/List 0.33
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.15
89 TestFunctional/parallel/ServiceCmd/URL 0.15
90 TestFunctional/parallel/ServiceCmdConnect 7.29
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 22.82
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
107 TestFunctional/parallel/MySQL 21.25
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.5
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.31
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.37
122 TestFunctional/parallel/License 0.25
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.01
125 TestFunctional/delete_minikube_cached_images 0.01
130 TestImageBuild/serial/Setup 13.94
131 TestImageBuild/serial/NormalBuild 1.65
132 TestImageBuild/serial/BuildWithBuildArg 0.77
133 TestImageBuild/serial/BuildWithDockerIgnore 0.55
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.55
138 TestJSONOutput/start/Command 30.27
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.49
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.39
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 10.45
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.18
166 TestMainNoArgs 0.04
167 TestMinikubeProfile 33.37
175 TestPause/serial/Start 28.87
176 TestPause/serial/SecondStartNoReconfiguration 25.51
177 TestPause/serial/Pause 0.49
178 TestPause/serial/VerifyStatus 0.13
179 TestPause/serial/Unpause 0.39
180 TestPause/serial/PauseAgain 0.52
181 TestPause/serial/DeletePaused 1.7
182 TestPause/serial/VerifyDeletedResources 0.06
196 TestRunningBinaryUpgrade 72.07
198 TestStoppedBinaryUpgrade/Setup 1.16
199 TestStoppedBinaryUpgrade/Upgrade 49.79
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
201 TestKubernetesUpgrade 304.99

TestDownloadOnly/v1.20.0/json-events (2.47s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (2.46923854s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (2.47s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (52.896884ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:04
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:04.640997   12644 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:04.641112   12644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:04.641122   12644 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:04.641126   12644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:04.641277   12644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5822/.minikube/bin
	W0910 17:29:04.641433   12644 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19598-5822/.minikube/config/config.json: open /home/jenkins/minikube-integration/19598-5822/.minikube/config/config.json: no such file or directory
	I0910 17:29:04.641955   12644 out.go:352] Setting JSON to true
	I0910 17:29:04.642923   12644 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":694,"bootTime":1725988651,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:04.642988   12644 start.go:139] virtualization: kvm guest
	I0910 17:29:04.645390   12644 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0910 17:29:04.645473   12644 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-5822/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:29:04.645502   12644 notify.go:220] Checking for updates...
	I0910 17:29:04.646702   12644 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:04.647980   12644 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:04.649309   12644 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	I0910 17:29:04.650635   12644 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	I0910 17:29:04.651865   12644 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (1.03s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.034781377s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (1.03s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
--- PASS: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (53.347715ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC | 10 Sep 24 17:29 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 10 Sep 24 17:29 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/10 17:29:07
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 17:29:07.400078   12801 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:29:07.400312   12801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:07.400325   12801 out.go:358] Setting ErrFile to fd 2...
	I0910 17:29:07.400330   12801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:29:07.400537   12801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5822/.minikube/bin
	I0910 17:29:07.401129   12801 out.go:352] Setting JSON to true
	I0910 17:29:07.401975   12801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":696,"bootTime":1725988651,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:29:07.402033   12801 start.go:139] virtualization: kvm guest
	I0910 17:29:07.404054   12801 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0910 17:29:07.404143   12801 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-5822/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:29:07.404189   12801 notify.go:220] Checking for updates...
	I0910 17:29:07.405387   12801 out.go:169] MINIKUBE_LOCATION=19598
	I0910 17:29:07.406706   12801 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:29:07.407817   12801 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	I0910 17:29:07.409049   12801 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	I0910 17:29:07.410185   12801 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:33859 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (41.88s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (40.32768232s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.547332343s)
--- PASS: TestOffline (41.88s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (44.013313ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (42.761917ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (102.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m42.063216664s)
--- PASS: TestAddons/Setup (102.06s)

TestAddons/serial/Volcano (39.41s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 8.153468ms
addons_test.go:897: volcano-scheduler stabilized in 8.273432ms
addons_test.go:905: volcano-admission stabilized in 8.322006ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-flhf9" [15c5e5e3-64b7-4012-b489-88e2c7b7a058] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004117251s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-c2jvd" [db339d17-6462-48e2-9a0f-0e103ae35a6d] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004048889s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-cpr2d" [05d5dc70-6b53-4792-a071-b2011b39f19a] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003149614s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [49a5340d-49b3-4528-9d73-5a5aa2f0ddd0] Pending
helpers_test.go:344: "test-job-nginx-0" [49a5340d-49b3-4528-9d73-5a5aa2f0ddd0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [49a5340d-49b3-4528-9d73-5a5aa2f0ddd0] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003475804s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.090464549s)
--- PASS: TestAddons/serial/Volcano (39.41s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/InspektorGadget (10.44s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mfgf7" [5a9e62c0-7b2d-41cb-b8dc-38cbbaa81561] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003861628s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.433863298s)
--- PASS: TestAddons/parallel/InspektorGadget (10.44s)

TestAddons/parallel/MetricsServer (5.38s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.874783ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fqjvs" [b1eca05a-bc88-4568-a1d8-00b1e625553b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003428239s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.38s)

TestAddons/parallel/HelmTiller (10.03s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.877278ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-kxglv" [428b4e57-5f02-4f6f-a024-e31d48a3c3dc] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003393749s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.75844915s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.03s)

TestAddons/parallel/CSI (48.99s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.384256ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3e29b814-dbf8-4377-a768-6792c9e9cd36] Pending
helpers_test.go:344: "task-pv-pod" [3e29b814-dbf8-4377-a768-6792c9e9cd36] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3e29b814-dbf8-4377-a768-6792c9e9cd36] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002787006s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d9039431-b493-4041-9123-77f0ea8468c3] Pending
helpers_test.go:344: "task-pv-pod-restore" [d9039431-b493-4041-9123-77f0ea8468c3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d9039431-b493-4041-9123-77f0ea8468c3] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003798387s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.243615688s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.99s)
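Condensed, the flow this test exercises is: provision a PVC through the csi-hostpath driver, mount it in a pod, snapshot it, tear both down, then restore a new claim and pod from the snapshot. The same sequence by hand (manifest paths exactly as logged above; each step must reach Bound/Running/readyToUse before the next):

  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml          # claim "hpvc"
  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml       # pod "task-pv-pod"
  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml     # snapshot "new-snapshot-demo"
  kubectl --context minikube delete pod task-pv-pod
  kubectl --context minikube delete pvc hpvc
  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml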

                                                
                                    
TestAddons/parallel/Headlamp (15.85s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-b76dv" [45e49202-8eff-4d58-b79e-7ca2fd18da94] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-b76dv" [45e49202-8eff-4d58-b79e-7ca2fd18da94] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003587195s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.378771506s)
--- PASS: TestAddons/parallel/Headlamp (15.85s)

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-xdj75" [2ce6640b-95e9-4637-ac8b-90024043bf49] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003374396s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/NvidiaDevicePlugin (5.21s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-772g6" [d5a88152-1da7-4b07-97d5-cc2a32452852] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003777276s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.21s)

TestAddons/parallel/Yakd (10.39s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-m64k5" [e8322794-ed92-4c0e-bfe7-935e548ab1fc] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003439983s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.389240021s)
--- PASS: TestAddons/parallel/Yakd (10.39s)

TestAddons/StoppedEnableDisable (10.74s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.435442626s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.74s)

TestCertExpiration (227.92s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.744926386s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.399059912s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.775183899s)
--- PASS: TestCertExpiration (227.92s)
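The point of this test is forced certificate rotation: the first start issues cluster certificates that expire in three minutes, the suite waits out the expiry (hence the gap between the two logged starts), and the second start must regenerate the certificates instead of failing. A sketch of the same check, assuming the none driver as logged:

  minikube start -p minikube --cert-expiration=3m --driver=none --bootstrapper=kubeadm
  sleep 180    # let the 3m certificates lapse
  minikube start -p minikube --cert-expiration=8760h --driver=none --bootstrapper=kubeadm   # must rotate, not fail
  minikube delete -p minikube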

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19598-5822/.minikube/files/etc/test/nested/copy/12632/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (28.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (28.272596837s)
--- PASS: TestFunctional/serial/StartWithProxy (28.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.29s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (33.286626704s)
functional_test.go:663: soft start took 33.287476008s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.29s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.033376091s)
functional_test.go:761: restart took 37.03348607s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.03s)
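`--extra-config` forwards component flags through kubeadm; here it enables the NamespaceAutoProvision admission plugin on the apiserver. One way to confirm the flag actually landed after such a restart (a sketch: the manifest path is the standard kubeadm static-pod location, an assumption about this setup rather than something shown in the log):

  minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml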

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
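The health check is just a label query over the static control-plane pods plus a status inspection. A one-liner that surfaces the same phase information the test asserts on (the jsonpath template is illustrative, not the test's own):

  kubectl --context minikube get po -l tier=control-plane -n kube-system \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'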

                                                
                                    
TestFunctional/serial/LogsCmd (0.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.79s)

TestFunctional/serial/LogsFileCmd (0.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd3437698287/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.82s)

TestFunctional/serial/InvalidService (3.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (149.654568ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:32665 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)
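Exit status 115 is minikube's SVC_UNREACHABLE error: the Service object exists (so the NodePort table can still be printed) but no running pod backs it. Reproducing by hand, assuming testdata/invalidsvc.yaml defines a service whose selector matches no runnable pod:

  kubectl --context minikube apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p minikube      # exits 115 with SVC_UNREACHABLE
  kubectl --context minikube delete -f testdata/invalidsvc.yaml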

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.423099ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.557667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)
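The sequence above is a config round trip: `get` on an unset key exits 14 with "specified key could not be found in config", `set` followed by `get` succeeds, and `unset` returns the key to the error state. In shell form:

  minikube -p minikube config get cpus     # exit 14: key not set
  minikube -p minikube config set cpus 2
  minikube -p minikube config get cpus     # prints 2
  minikube -p minikube config unset cpus
  minikube -p minikube config get cpus     # exit 14 again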

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.63s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/10 17:49:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 47794: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.63s)

TestFunctional/parallel/DryRun (0.15s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (74.830393ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0910 17:49:09.949488   48163 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:49:09.949596   48163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:09.949605   48163 out.go:358] Setting ErrFile to fd 2...
	I0910 17:49:09.949609   48163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:09.949761   48163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5822/.minikube/bin
	I0910 17:49:09.950288   48163 out.go:352] Setting JSON to false
	I0910 17:49:09.951259   48163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1899,"bootTime":1725988651,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:49:09.951314   48163 start.go:139] virtualization: kvm guest
	I0910 17:49:09.953556   48163 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0910 17:49:09.954698   48163 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-5822/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:49:09.954726   48163 notify.go:220] Checking for updates...
	I0910 17:49:09.954769   48163 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:49:09.956109   48163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:49:09.957454   48163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	I0910 17:49:09.958674   48163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	I0910 17:49:09.959801   48163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:49:09.960825   48163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:49:09.962213   48163 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:49:09.962483   48163 exec_runner.go:51] Run: systemctl --version
	I0910 17:49:09.964793   48163 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:49:09.974985   48163 out.go:177] * Using the none driver based on existing profile
	I0910 17:49:09.976214   48163 start.go:297] selected driver: none
	I0910 17:49:09.976226   48163 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:49:09.976345   48163 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:49:09.976367   48163 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0910 17:49:09.976630   48163 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0910 17:49:09.978837   48163 out.go:201] 
	W0910 17:49:09.979907   48163 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0910 17:49:09.980856   48163 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)
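`--dry-run` validates flags against the saved profile without touching the cluster, so the undersized 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while the second, flag-free dry run validates the existing profile and exits 0:

  minikube start -p minikube --dry-run --memory 250MB   # exit 23: below the 1800MB floor
  minikube start -p minikube --dry-run                  # exit 0: profile as-is is valid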

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (80.057562ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0910 17:49:10.099287   48192 out.go:345] Setting OutFile to fd 1 ...
	I0910 17:49:10.099439   48192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:10.099448   48192 out.go:358] Setting ErrFile to fd 2...
	I0910 17:49:10.099454   48192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0910 17:49:10.099803   48192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19598-5822/.minikube/bin
	I0910 17:49:10.100476   48192 out.go:352] Setting JSON to false
	I0910 17:49:10.101720   48192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1899,"bootTime":1725988651,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0910 17:49:10.101793   48192 start.go:139] virtualization: kvm guest
	I0910 17:49:10.103812   48192 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0910 17:49:10.105394   48192 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19598-5822/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 17:49:10.105471   48192 notify.go:220] Checking for updates...
	I0910 17:49:10.105488   48192 out.go:177]   - MINIKUBE_LOCATION=19598
	I0910 17:49:10.106687   48192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 17:49:10.107999   48192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	I0910 17:49:10.109350   48192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	I0910 17:49:10.110762   48192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0910 17:49:10.112107   48192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 17:49:10.113817   48192 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0910 17:49:10.114191   48192 exec_runner.go:51] Run: systemctl --version
	I0910 17:49:10.116555   48192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0910 17:49:10.126960   48192 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0910 17:49:10.128168   48192 start.go:297] selected driver: none
	I0910 17:49:10.128180   48192 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1725963390-19606@sha256:05c3fb4a3ac73e1a547cb186e5aec949a4a3d18af7d1444e0d1365c17dbedef9 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0910 17:49:10.128268   48192 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 17:49:10.128291   48192 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0910 17:49:10.128575   48192 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0910 17:49:10.130533   48192 out.go:201] 
	W0910 17:49:10.131615   48192 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0910 17:49:10.132782   48192 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.4s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.40s)
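Note the `kublet:` in the logged format string: it is literal template text rather than a field reference, so Go's template engine copies it through verbatim while `{{.Kubelet}}` does the actual lookup; the typo lives in the test source, not in minikube. The three output modes side by side:

  minikube -p minikube status                                            # human-readable
  minikube -p minikube status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'   # Go template
  minikube -p minikube status -o json                                    # machine-readable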

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "165.883033ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "42.161708ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "171.83937ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "40.831226ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-j89jn" [dd484565-3d5a-4306-ad39-648243c093ba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-j89jn" [dd484565-3d5a-4306-ad39-648243c093ba] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003882376s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)
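The ServiceCmd subtests that follow all operate on this one deployment. End to end, the shape is (image and port as logged above):

  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
  minikube -p minikube service list                 # table of exposed services
  minikube -p minikube service hello-node --url     # NodePort endpoint, e.g. http://10.138.0.48:30724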

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "324.162215ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:30724
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:30724
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

TestFunctional/parallel/ServiceCmdConnect (7.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4mqqn" [3a45bdfc-b964-415b-9f8d-6787a0cf913d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4mqqn" [3a45bdfc-b964-415b-9f8d-6787a0cf913d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003803756s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:32729
functional_test.go:1675: http://10.138.0.48:32729: success! body:

Hostname: hello-node-connect-67bdd5bbb4-4mqqn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:32729
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.29s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (22.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d4e0dcad-0c18-451d-a3e0-363dbf5e5c4c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004099587s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6f8ea1cd-0e38-45ef-8ae9-908f60f5435d] Pending
helpers_test.go:344: "sp-pod" [6f8ea1cd-0e38-45ef-8ae9-908f60f5435d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6f8ea1cd-0e38-45ef-8ae9-908f60f5435d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003734881s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.156613163s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ade44412-91ea-47de-81d8-31b5f42162b8] Pending
helpers_test.go:344: "sp-pod" [ade44412-91ea-47de-81d8-31b5f42162b8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ade44412-91ea-47de-81d8-31b5f42162b8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003458168s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.82s)
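The persistence assertion is the interesting part: a file written in the first pod must survive that pod's deletion and reappear in its replacement, proving the claim is backed by a real volume rather than pod-local storage. Condensed (manifests and pod name as logged):

  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml     # fresh pod, same claim
  kubectl --context minikube exec sp-pod -- ls /tmp/mount                       # foo must still be there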

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 49892: operation not permitted
helpers_test.go:508: unable to kill pid 49844: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c5110c89-cafe-4691-9437-7f5b8dc81606] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c5110c89-cafe-4691-9437-7f5b8dc81606] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003543131s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.216.27 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
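`minikube tunnel` is what gives a LoadBalancer service a routable ingress IP on a local cluster: while it runs, `.status.loadBalancer.ingress[0].ip` gets populated and the service answers directly, which is the 10.97.216.27 probe above. A sketch (the tunnel blocks in the foreground, so background it for scripting):

  minikube -p minikube tunnel --alsologtostderr &
  kubectl --context minikube get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.97.216.27/    # the ingress IP reported above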

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MySQL (21.25s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-stgjt" [a0514c0c-09cd-4c78-90b3-05782167423f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-stgjt" [a0514c0c-09cd-4c78-90b3-05782167423f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003841083s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-stgjt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-stgjt -- mysql -ppassword -e "show databases;": exit status 1 (109.123297ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-stgjt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-stgjt -- mysql -ppassword -e "show databases;": exit status 1 (108.721938ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-stgjt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.25s)
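Note: the pod reported Running while mysqld inside it was still initializing, so the first two queries failed with ERROR 2002 and the harness simply retried until the socket came up. A minimal manual equivalent of that wait loop, reusing the pod name from the log above and assuming the stock mysql image (which ships mysqladmin):

	$ until kubectl --context minikube exec mysql-6cdb49bbb-stgjt -- mysqladmin ping -ppassword --silent; do sleep 2; done
	$ kubectl --context minikube exec mysql-6cdb49bbb-stgjt -- mysql -ppassword -e "show databases;"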

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.5s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.495037645s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.50s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.306809441s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.31s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
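Note: the go-template above prints only the label keys of the first node. For an interactive check the same information is easier to read with either of these equivalent forms (the node name in the second is an assumption; on the none driver the node is named after the host):

	$ kubectl --context minikube get nodes --show-labels
	$ kubectl --context minikube get node minikube -o jsonpath='{.metadata.labels}'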

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.37s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.37s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (13.94s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.938128028s)
--- PASS: TestImageBuild/serial/Setup (13.94s)

TestImageBuild/serial/NormalBuild (1.65s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.646030056s)
--- PASS: TestImageBuild/serial/NormalBuild (1.65s)

TestImageBuild/serial/BuildWithBuildArg (0.77s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.77s)

TestImageBuild/serial/BuildWithDockerIgnore (0.55s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.55s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)
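Note: the builds above exercise three flag patterns of minikube image build. Stripped of the test harness (and assuming the default profile, so -p minikube can be dropped), they reduce to:

	$ minikube image build -t aaa:latest ./testdata/image-build/test-normal
	$ minikube image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
	$ minikube image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f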

TestJSONOutput/start/Command (30.27s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (30.272642676s)
--- PASS: TestJSONOutput/start/Command (30.27s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.39s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.39s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.45s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.449649693s)
--- PASS: TestJSONOutput/stop/Command (10.45s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.702708ms)

-- stdout --
	{"specversion":"1.0","id":"35d617d6-b75d-45bb-88c0-ec750f722f6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8b1e6dc-59ae-45b1-8a18-138826f5891f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19598"}}
	{"specversion":"1.0","id":"9bc10f61-35ea-4df9-8528-3a4ee867b936","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"614c58ef-06f4-44fc-bd5f-30821b5dd297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig"}}
	{"specversion":"1.0","id":"28c53c82-9189-4b08-88ff-4d978c23a395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube"}}
	{"specversion":"1.0","id":"99f496e4-7631-4ac2-b8e4-a3423513eb97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"74e6e4a1-f9d7-450c-8584-18446154fd34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3d915448-c871-4950-8cde-cc4f19a4f1aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.18s)
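Note: with --output=json, minikube emits one CloudEvents-style JSON object per line on stdout, so the stream is scriptable line by line. A sketch of pulling the error message out of a failed start, assuming jq is installed:

	$ out/minikube-linux-amd64 start -p minikube --driver=fail --output=json | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'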

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (33.37s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.716520066s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.82385347s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.223260801s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.37s)
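Note: profile list -ojson is the machine-readable form of the profile listing used above. Assuming jq, and assuming the output keeps its usual valid/invalid grouping, the healthy profile names can be extracted with:

	$ out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'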

TestPause/serial/Start (28.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.865826595s)
--- PASS: TestPause/serial/Start (28.87s)

TestPause/serial/SecondStartNoReconfiguration (25.51s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (25.508776901s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.51s)

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (128.768749ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
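Note: the StatusCode values in the cluster layout reuse HTTP-style numbers: 200 (OK/Running), 405 (Stopped), 418 (Paused). A paused profile also makes the status command itself exit 2, which is why the harness treats the non-zero exit as expected here; scripts should read the JSON rather than the exit code. A sketch, assuming jq:

	$ out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster | jq -r '.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)"'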

TestPause/serial/Unpause (0.39s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.39s)

TestPause/serial/PauseAgain (0.52s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.52s)

TestPause/serial/DeletePaused (1.7s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.695342699s)
--- PASS: TestPause/serial/DeletePaused (1.70s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (72.07s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4006641352 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4006641352 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (30.707607047s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (37.02381576s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.002408651s)
--- PASS: TestRunningBinaryUpgrade (72.07s)

TestStoppedBinaryUpgrade/Setup (1.16s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.16s)

TestStoppedBinaryUpgrade/Upgrade (49.79s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1377047646 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1377047646 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.230756764s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1377047646 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1377047646 -p minikube stop: (23.624786137s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.930036772s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.79s)
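Note: the upgrade path exercised here is: start the cluster with the previous release, stop it, then start the same profile with the new binary. A sketch of the same flow, with a hypothetical path standing in for the downloaded old release:

	$ /tmp/minikube-old start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
	$ /tmp/minikube-old -p minikube stop
	$ out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm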

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestKubernetesUpgrade (304.99s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.543415825s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.309302684s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (69.53257ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m15.353160235s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (63.528221ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19598-5822/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19598-5822/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.383925135s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.219527123s)
--- PASS: TestKubernetesUpgrade (304.99s)
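Note: minikube upgrades a cluster in place (v1.20.0 to v1.31.0 above) but refuses an in-place downgrade with K8S_DOWNGRADE_UNSUPPORTED; the test confirms the refusal and then restarts at the current version. Per the suggestion in the log, moving back to an older Kubernetes means recreating the cluster:

	$ minikube delete
	$ minikube start --kubernetes-version=v1.20.0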

Test skip (56/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.0/preload-exists 0
14 TestDownloadOnly/v1.31.0/cached-images 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.12
190 TestStartStop/group/newest-cni 0.12
191 TestStartStop/group/default-k8s-diff-port 0.12
192 TestStartStop/group/no-preload 0.12
193 TestStartStop/group/disable-driver-mounts 0.12
194 TestStartStop/group/embed-certs 0.12
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.12s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.12s)

TestStartStop/group/newest-cni (0.12s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.12s)

TestStartStop/group/default-k8s-diff-port (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.12s)

TestStartStop/group/no-preload (0.12s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

TestStartStop/group/embed-certs (0.12s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.12s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)