Test Report: none_Linux 19576

2e9b50ac88536491e648f1503809a6b59d99d481:2024-09-06:36104

Tests failed (2/167)

| Order | Failed test                      | Duration (s) |
|-------|----------------------------------|--------------|
| 33    | TestAddons/parallel/Registry     | 71.8         |
| 120   | TestFunctional/parallel/License  | 0.56         |
TestAddons/parallel/Registry (71.8s)
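The step that fails below is an in-cluster HTTP reachability probe: the test runs `wget --spider` from a busybox pod against the registry service and times out. As a rough stand-alone sketch of that kind of check (a hypothetical helper, not minikube's actual test code; URL and timeout are illustrative):

```python
import socket
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if `url` answers an HTTP HEAD request within `timeout`
    seconds, False on timeout or connection failure. This mirrors the
    spirit of the test's `wget --spider` check, not its exact behavior."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except (urllib.error.URLError, socket.timeout):
        return False
```

In the failing run, the equivalent probe against `http://registry.kube-system.svc.cluster.local` never got an answer, so the busybox pod exited non-zero after its one-minute wait.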

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.777295ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-v2hm7" [e6f429ae-9168-48c6-8e02-968ce47780ae] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003578182s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lp46v" [6d764de2-9241-4f56-9564-ae56133efa57] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00358376s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.079565408s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/06 18:43:04 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:41257               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:30 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC | 06 Sep 24 18:30 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC | 06 Sep 24 18:33 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 06 Sep 24 18:33 UTC | 06 Sep 24 18:33 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:30:16
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:30:16.432171   16537 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:30:16.432394   16537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:30:16.432403   16537 out.go:358] Setting ErrFile to fd 2...
	I0906 18:30:16.432408   16537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:30:16.432615   16537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-5859/.minikube/bin
	I0906 18:30:16.433214   16537 out.go:352] Setting JSON to false
	I0906 18:30:16.434103   16537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":758,"bootTime":1725646658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:30:16.434161   16537 start.go:139] virtualization: kvm guest
	I0906 18:30:16.436247   16537 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0906 18:30:16.437386   16537 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-5859/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:30:16.437429   16537 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:30:16.437433   16537 notify.go:220] Checking for updates...
	I0906 18:30:16.438952   16537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:30:16.440252   16537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	I0906 18:30:16.441596   16537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	I0906 18:30:16.442748   16537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:30:16.443916   16537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:30:16.445141   16537 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:30:16.454936   16537 out.go:177] * Using the none driver based on user configuration
	I0906 18:30:16.456069   16537 start.go:297] selected driver: none
	I0906 18:30:16.456089   16537 start.go:901] validating driver "none" against <nil>
	I0906 18:30:16.456100   16537 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:30:16.456128   16537 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0906 18:30:16.456397   16537 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0906 18:30:16.456931   16537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:30:16.457143   16537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:30:16.457199   16537 cni.go:84] Creating CNI manager for ""
	I0906 18:30:16.457218   16537 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:30:16.457226   16537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:30:16.457272   16537 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:16.458867   16537 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0906 18:30:16.460413   16537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/config.json ...
	I0906 18:30:16.460444   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/config.json: {Name:mkdd137618f698f37b9d6029ffe3cabfeea10cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:16.460577   16537 start.go:360] acquireMachinesLock for minikube: {Name:mk32cf99842cb1a787bdad14db608c92d2701216 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:30:16.460614   16537 start.go:364] duration metric: took 19.25µs to acquireMachinesLock for "minikube"
	I0906 18:30:16.460632   16537 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerI
Ps:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 18:30:16.460716   16537 start.go:125] createHost starting for "" (driver="none")
	I0906 18:30:16.462209   16537 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0906 18:30:16.463460   16537 exec_runner.go:51] Run: systemctl --version
	I0906 18:30:16.466004   16537 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0906 18:30:16.466050   16537 client.go:168] LocalClient.Create starting
	I0906 18:30:16.466147   16537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-5859/.minikube/certs/ca.pem
	I0906 18:30:16.466187   16537 main.go:141] libmachine: Decoding PEM data...
	I0906 18:30:16.466213   16537 main.go:141] libmachine: Parsing certificate...
	I0906 18:30:16.466269   16537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-5859/.minikube/certs/cert.pem
	I0906 18:30:16.466293   16537 main.go:141] libmachine: Decoding PEM data...
	I0906 18:30:16.466309   16537 main.go:141] libmachine: Parsing certificate...
	I0906 18:30:16.466667   16537 client.go:171] duration metric: took 606.29µs to LocalClient.Create
	I0906 18:30:16.466690   16537 start.go:167] duration metric: took 688.493µs to libmachine.API.Create "minikube"
	I0906 18:30:16.466696   16537 start.go:293] postStartSetup for "minikube" (driver="none")
	I0906 18:30:16.466738   16537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:30:16.466794   16537 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:30:16.475722   16537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 18:30:16.475757   16537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 18:30:16.475775   16537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 18:30:16.477856   16537 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0906 18:30:16.479126   16537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-5859/.minikube/addons for local assets ...
	I0906 18:30:16.479195   16537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-5859/.minikube/files for local assets ...
	I0906 18:30:16.479226   16537 start.go:296] duration metric: took 12.524095ms for postStartSetup
	I0906 18:30:16.479820   16537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/config.json ...
	I0906 18:30:16.479964   16537 start.go:128] duration metric: took 19.238821ms to createHost
	I0906 18:30:16.479977   16537 start.go:83] releasing machines lock for "minikube", held for 19.351879ms
	I0906 18:30:16.480315   16537 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 18:30:16.480394   16537 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0906 18:30:16.482200   16537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:30:16.482250   16537 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:30:16.491093   16537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 18:30:16.491116   16537 start.go:495] detecting cgroup driver to use...
	I0906 18:30:16.491145   16537 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0906 18:30:16.491309   16537 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:30:16.510925   16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 18:30:16.520059   16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 18:30:16.529348   16537 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 18:30:16.529403   16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 18:30:16.538116   16537 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:30:16.547347   16537 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 18:30:16.556188   16537 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:30:16.567992   16537 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:30:16.576909   16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 18:30:16.585687   16537 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 18:30:16.594479   16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 18:30:16.603324   16537 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:30:16.611285   16537 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:30:16.618460   16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0906 18:30:16.847673   16537 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0906 18:30:16.973791   16537 start.go:495] detecting cgroup driver to use...
	I0906 18:30:16.973847   16537 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0906 18:30:16.973956   16537 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:30:16.993887   16537 exec_runner.go:51] Run: which cri-dockerd
	I0906 18:30:16.995134   16537 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 18:30:17.003608   16537 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0906 18:30:17.003646   16537 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0906 18:30:17.003696   16537 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0906 18:30:17.012186   16537 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 18:30:17.012343   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2291484935 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0906 18:30:17.021886   16537 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0906 18:30:17.256569   16537 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0906 18:30:17.476575   16537 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 18:30:17.476726   16537 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0906 18:30:17.476741   16537 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0906 18:30:17.476785   16537 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0906 18:30:17.485966   16537 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0906 18:30:17.486113   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1332208072 /etc/docker/daemon.json
	I0906 18:30:17.494142   16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0906 18:30:17.722095   16537 exec_runner.go:51] Run: sudo systemctl restart docker
	I0906 18:30:18.102750   16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 18:30:18.113702   16537 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0906 18:30:18.128838   16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 18:30:18.140312   16537 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0906 18:30:18.345976   16537 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0906 18:30:18.572578   16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0906 18:30:18.794135   16537 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0906 18:30:18.808429   16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 18:30:18.819316   16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0906 18:30:19.041063   16537 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0906 18:30:19.109832   16537 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 18:30:19.109903   16537 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0906 18:30:19.111678   16537 start.go:563] Will wait 60s for crictl version
	I0906 18:30:19.111723   16537 exec_runner.go:51] Run: which crictl
	I0906 18:30:19.112540   16537 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0906 18:30:19.142353   16537 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 18:30:19.142410   16537 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0906 18:30:19.165488   16537 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0906 18:30:19.189803   16537 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 18:30:19.189878   16537 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0906 18:30:19.192954   16537 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0906 18:30:19.194423   16537 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:30:19.194550   16537 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:30:19.194572   16537 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.0 docker true true} ...
	I0906 18:30:19.194673   16537 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
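The kubelet drop-in dumped above contains two `ExecStart` lines on purpose: in a systemd drop-in, an empty `ExecStart=` first clears the command inherited from the main `kubelet.service` unit, and the following line sets the replacement. A minimal sketch of the pattern, written to a scratch file rather than `/etc/systemd/system` and with the kubelet flags trimmed for brevity:

```shell
# Systemd drop-in convention: the bare "ExecStart=" clears the inherited
# ExecStart before the next line replaces it. Without the clearing line,
# systemd would reject a second ExecStart= for a non-oneshot service.
f=$(mktemp)
cat <<'EOF' > "$f"
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
EOF
grep -c '^ExecStart=' "$f"
```

This is why the log later runs `sudo systemctl daemon-reload` before `sudo systemctl start kubelet`: systemd only re-reads drop-ins on reload.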
	I0906 18:30:19.194723   16537 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0906 18:30:19.239708   16537 cni.go:84] Creating CNI manager for ""
	I0906 18:30:19.239740   16537 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:30:19.239775   16537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:30:19.239804   16537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:30:19.239987   16537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.154.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-9"
	  kubeletExtraArgs:
	    node-ip: 10.154.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
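The generated config above declares `apiVersion: kubeadm.k8s.io/v1beta3`, which kubeadm 1.31 flags as deprecated during `kubeadm init` further down this log, suggesting `kubeadm config migrate --old-config old.yaml --new-config new.yaml`. A minimal sketch for spotting the stale API version before init (the config path is the one minikube writes; adjust as needed):

```shell
# Detect the deprecated kubeadm.k8s.io/v1beta3 spec that kubeadm 1.31
# warns about. The path below is illustrative.
cfg=${KUBEADM_CFG:-/var/tmp/minikube/kubeadm.yaml}
if grep -q '^apiVersion: kubeadm.k8s.io/v1beta3$' "$cfg"; then
  echo "deprecated v1beta3 spec; consider 'kubeadm config migrate'"
fi
```

Here it is only a warning: kubeadm still accepts the v1beta3 spec in 1.31, so init proceeds.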
	I0906 18:30:19.240071   16537 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:19.248453   16537 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0906 18:30:19.248509   16537 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:19.256879   16537 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0906 18:30:19.256889   16537 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0906 18:30:19.256889   16537 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0906 18:30:19.256933   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0906 18:30:19.256934   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0906 18:30:19.256941   16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:30:19.269236   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0906 18:30:19.305762   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3584136102 /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:30:19.320789   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1871334178 /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:30:19.349277   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube899051023 /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:30:19.413144   16537 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 18:30:19.421525   16537 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0906 18:30:19.421549   16537 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0906 18:30:19.421583   16537 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0906 18:30:19.429379   16537 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0906 18:30:19.429515   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4233444896 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0906 18:30:19.438680   16537 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0906 18:30:19.438703   16537 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0906 18:30:19.438735   16537 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0906 18:30:19.446468   16537 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:30:19.446596   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1209045392 /lib/systemd/system/kubelet.service
	I0906 18:30:19.454832   16537 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0906 18:30:19.454951   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2862409953 /var/tmp/minikube/kubeadm.yaml.new
	I0906 18:30:19.465359   16537 exec_runner.go:51] Run: grep 10.154.0.4	control-plane.minikube.internal$ /etc/hosts
	I0906 18:30:19.466601   16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0906 18:30:19.684681   16537 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0906 18:30:19.699588   16537 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube for IP: 10.154.0.4
	I0906 18:30:19.699610   16537 certs.go:194] generating shared ca certs ...
	I0906 18:30:19.699632   16537 certs.go:226] acquiring lock for ca certs: {Name:mk556d199463ef19f85b68a44414038534ced562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:19.699774   16537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.key
	I0906 18:30:19.699817   16537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-5859/.minikube/proxy-client-ca.key
	I0906 18:30:19.699827   16537 certs.go:256] generating profile certs ...
	I0906 18:30:19.699878   16537 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.key
	I0906 18:30:19.699891   16537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.crt with IP's: []
	I0906 18:30:19.797572   16537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.crt ...
	I0906 18:30:19.797601   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.crt: {Name:mkbea853624ff1b416b05c7051e56df9f1e6e48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:19.797739   16537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.key ...
	I0906 18:30:19.797752   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.key: {Name:mk25f9732ca0079107ee76d2f71d190e1415951e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:19.797822   16537 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6
	I0906 18:30:19.797837   16537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
	I0906 18:30:19.862150   16537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
	I0906 18:30:19.862181   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mkdcf29800e713f1ce977e508e4cdbc700b2cb92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:19.862317   16537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
	I0906 18:30:19.862327   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mk735b881f868df870d87d4161b62ab9b9a083d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:19.862378   16537 certs.go:381] copying /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt
	I0906 18:30:19.862446   16537 certs.go:385] copying /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key
	I0906 18:30:19.862498   16537 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key
	I0906 18:30:19.862511   16537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0906 18:30:20.005278   16537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt ...
	I0906 18:30:20.005308   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt: {Name:mke67782129f4169dc8d462ad2039289f538d003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:20.005461   16537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key ...
	I0906 18:30:20.005472   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key: {Name:mk3b0da6b806d9ce972a7bd1c9549e30eb90e836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:20.005627   16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 18:30:20.005660   16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/ca.pem (1082 bytes)
	I0906 18:30:20.005683   16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:30:20.005703   16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/key.pem (1679 bytes)
	I0906 18:30:20.006270   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:30:20.006379   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2080329162 /var/lib/minikube/certs/ca.crt
	I0906 18:30:20.015530   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 18:30:20.015709   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1609637301 /var/lib/minikube/certs/ca.key
	I0906 18:30:20.024112   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:30:20.024233   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3717130230 /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 18:30:20.033184   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:30:20.033318   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3095673967 /var/lib/minikube/certs/proxy-client-ca.key
	I0906 18:30:20.041781   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0906 18:30:20.041890   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube283663809 /var/lib/minikube/certs/apiserver.crt
	I0906 18:30:20.049480   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 18:30:20.049606   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2699080327 /var/lib/minikube/certs/apiserver.key
	I0906 18:30:20.057407   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:30:20.057513   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2065321659 /var/lib/minikube/certs/proxy-client.crt
	I0906 18:30:20.065215   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:30:20.065329   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1342280924 /var/lib/minikube/certs/proxy-client.key
	I0906 18:30:20.073367   16537 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0906 18:30:20.073385   16537 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:20.073416   16537 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:20.081598   16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:30:20.081729   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube562049854 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:20.089336   16537 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:30:20.089439   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1205555838 /var/lib/minikube/kubeconfig
	I0906 18:30:20.097399   16537 exec_runner.go:51] Run: openssl version
	I0906 18:30:20.100224   16537 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:30:20.109385   16537 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:20.110803   16537 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:20.110857   16537 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:20.113716   16537 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
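The symlink name `b5213941.0` created above comes from OpenSSL's hashed trust-directory layout: CA lookups in `/etc/ssl/certs` go by subject-name hash, and `openssl x509 -hash` prints the 8-hex-digit value used as the link name. A sketch of the derivation, using a throwaway self-signed cert in place of `minikubeCA.pem`:

```shell
# OpenSSL trust directories index CAs as <subject-hash>.0 symlinks;
# "openssl x509 -hash" prints that hash (b5213941 for minikubeCA above).
# The cert generated here is just for demonstration.
pem=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -out "$pem" -days 1 -subj "/CN=demoCA" 2>/dev/null
openssl x509 -hash -noout -in "$pem"
```

This is the same indexing that `c_rehash` (or `update-ca-certificates`) performs for an entire trust directory; minikube does it inline for its single CA.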
	I0906 18:30:20.121732   16537 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:30:20.122913   16537 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:30:20.123025   16537 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:20.123141   16537 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 18:30:20.140016   16537 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:30:20.148566   16537 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:30:20.156784   16537 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0906 18:30:20.178343   16537 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:30:20.187031   16537 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:30:20.187049   16537 kubeadm.go:157] found existing configuration files:
	
	I0906 18:30:20.187091   16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:30:20.195222   16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:30:20.195276   16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:30:20.203103   16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:30:20.210696   16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:30:20.210737   16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:30:20.217719   16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:30:20.226032   16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:30:20.226093   16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:30:20.234617   16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:30:20.242464   16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:30:20.242521   16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:30:20.250317   16537 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 18:30:20.283837   16537 kubeadm.go:310] W0906 18:30:20.283731   17412 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:20.284372   16537 kubeadm.go:310] W0906 18:30:20.284328   17412 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:20.286218   16537 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:30:20.286242   16537 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:30:20.383845   16537 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 18:30:20.383982   16537 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:30:20.383995   16537 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:30:20.384001   16537 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:30:20.396335   16537 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:30:20.400376   16537 out.go:235]   - Generating certificates and keys ...
	I0906 18:30:20.400441   16537 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:30:20.400462   16537 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:30:20.748846   16537 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:30:20.882128   16537 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:30:21.048605   16537 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:30:21.137813   16537 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:30:21.252667   16537 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:30:21.252751   16537 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0906 18:30:21.304998   16537 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:30:21.305058   16537 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0906 18:30:21.583104   16537 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:30:21.674513   16537 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:30:21.758487   16537 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:30:21.758656   16537 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:30:21.836872   16537 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:30:22.122091   16537 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:30:22.195165   16537 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:30:22.471931   16537 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:30:22.603256   16537 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:30:22.603841   16537 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:30:22.606236   16537 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:30:22.608589   16537 out.go:235]   - Booting up control plane ...
	I0906 18:30:22.608630   16537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:30:22.608657   16537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:30:22.610022   16537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:30:22.631175   16537 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:30:22.635936   16537 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:30:22.635980   16537 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:30:22.890998   16537 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:30:22.891024   16537 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:30:23.392800   16537 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.742085ms
	I0906 18:30:23.392822   16537 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:30:27.894298   16537 kubeadm.go:310] [api-check] The API server is healthy after 4.501453314s
	I0906 18:30:27.904807   16537 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:30:27.915389   16537 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:30:27.932367   16537 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:30:27.932387   16537 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:30:27.939160   16537 kubeadm.go:310] [bootstrap-token] Using token: b9dczz.lnjn4ynjbdzm4imt
	I0906 18:30:27.940616   16537 out.go:235]   - Configuring RBAC rules ...
	I0906 18:30:27.940648   16537 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:30:27.946018   16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:30:27.952061   16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:30:27.954267   16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:30:27.957389   16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:30:27.959683   16537 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:30:28.300445   16537 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:30:28.719481   16537 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:30:29.299366   16537 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:30:29.300174   16537 kubeadm.go:310] 
	I0906 18:30:29.300189   16537 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:30:29.300194   16537 kubeadm.go:310] 
	I0906 18:30:29.300198   16537 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:30:29.300202   16537 kubeadm.go:310] 
	I0906 18:30:29.300205   16537 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:30:29.300210   16537 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:30:29.300213   16537 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:30:29.300217   16537 kubeadm.go:310] 
	I0906 18:30:29.300220   16537 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:30:29.300224   16537 kubeadm.go:310] 
	I0906 18:30:29.300227   16537 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:30:29.300230   16537 kubeadm.go:310] 
	I0906 18:30:29.300234   16537 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:30:29.300238   16537 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:30:29.300242   16537 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:30:29.300245   16537 kubeadm.go:310] 
	I0906 18:30:29.300250   16537 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:30:29.300254   16537 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:30:29.300258   16537 kubeadm.go:310] 
	I0906 18:30:29.300274   16537 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b9dczz.lnjn4ynjbdzm4imt \
	I0906 18:30:29.300280   16537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:556ca23e9a7c272d5a050de938211552af48eef0910c4191e38aa373095b5858 \
	I0906 18:30:29.300284   16537 kubeadm.go:310] 	--control-plane 
	I0906 18:30:29.300288   16537 kubeadm.go:310] 
	I0906 18:30:29.300292   16537 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:30:29.300296   16537 kubeadm.go:310] 
	I0906 18:30:29.300300   16537 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b9dczz.lnjn4ynjbdzm4imt \
	I0906 18:30:29.300304   16537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:556ca23e9a7c272d5a050de938211552af48eef0910c4191e38aa373095b5858 
	I0906 18:30:29.303239   16537 cni.go:84] Creating CNI manager for ""
	I0906 18:30:29.303269   16537 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:30:29.305166   16537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 18:30:29.306356   16537 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0906 18:30:29.317565   16537 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 18:30:29.317745   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1574200271 /etc/cni/net.d/1-k8s.conflist
	I0906 18:30:29.327842   16537 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:30:29.327902   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:29.327955   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_09_06T18_30_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0906 18:30:29.336433   16537 ops.go:34] apiserver oom_adj: -16
	I0906 18:30:29.399003   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:29.899646   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:30.399663   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:30.899995   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:31.399976   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:31.899941   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:32.400007   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:32.899483   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:33.399432   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:33.900003   16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:33.970200   16537 kubeadm.go:1113] duration metric: took 4.642345841s to wait for elevateKubeSystemPrivileges
	I0906 18:30:33.970237   16537 kubeadm.go:394] duration metric: took 13.847284956s to StartCluster
	I0906 18:30:33.970259   16537 settings.go:142] acquiring lock: {Name:mk11025cc1d7eecccc04d83d4495919089f6c151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:33.970324   16537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-5859/kubeconfig
	I0906 18:30:33.970963   16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/kubeconfig: {Name:mke818bbf85a8934c960f593fcb655d7c5e2890c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:33.971190   16537 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:30:33.971289   16537 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 18:30:33.971380   16537 addons.go:69] Setting yakd=true in profile "minikube"
	I0906 18:30:33.971397   16537 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0906 18:30:33.971409   16537 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0906 18:30:33.971432   16537 addons.go:69] Setting registry=true in profile "minikube"
	I0906 18:30:33.971434   16537 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0906 18:30:33.971452   16537 addons.go:234] Setting addon registry=true in "minikube"
	I0906 18:30:33.971452   16537 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0906 18:30:33.971465   16537 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0906 18:30:33.971473   16537 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0906 18:30:33.971480   16537 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:30:33.971488   16537 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0906 18:30:33.971490   16537 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0906 18:30:33.971497   16537 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0906 18:30:33.971508   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.971517   16537 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0906 18:30:33.971520   16537 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0906 18:30:33.971524   16537 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0906 18:30:33.971538   16537 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0906 18:30:33.971542   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.971554   16537 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0906 18:30:33.971569   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.971608   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.971423   16537 addons.go:234] Setting addon yakd=true in "minikube"
	I0906 18:30:33.972145   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.971498   16537 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0906 18:30:33.972237   16537 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0906 18:30:33.972268   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.972824   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.972827   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.972843   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.972843   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.972849   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.972865   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.972879   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.971452   16537 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0906 18:30:33.972904   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.972982   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.973084   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.973098   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.973105   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.973119   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.973130   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.973146   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.971388   16537 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0906 18:30:33.973640   16537 out.go:177] * Configuring local host environment ...
	I0906 18:30:33.973904   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.973919   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.973960   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.971482   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.974654   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.974675   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.974723   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.971485   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:33.975166   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:33.975195   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:33.975228   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 18:30:33.975290   16537 out.go:270] * 
	W0906 18:30:33.975306   16537 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0906 18:30:33.975318   16537 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0906 18:30:33.975331   16537 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0906 18:30:33.975339   16537 out.go:270] * 
	W0906 18:30:33.975402   16537 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0906 18:30:33.975411   16537 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	I0906 18:30:33.973647   16537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0906 18:30:33.971506   16537 addons.go:69] Setting gcp-auth=true in profile "minikube"
	W0906 18:30:33.975795   16537 out.go:270] * 
	I0906 18:30:33.972879   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.971508   16537 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0906 18:30:33.971486   16537 addons.go:69] Setting volcano=true in profile "minikube"
	W0906 18:30:33.976585   16537 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0906 18:30:33.976600   16537 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	I0906 18:30:33.991096   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:33.996743   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.001773   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.001889   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.003288   16537 mustload.go:65] Loading cluster: minikube
	I0906 18:30:34.003341   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.003562   16537 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:30:34.004141   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.004157   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.004190   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.004391   16537 addons.go:234] Setting addon volcano=true in "minikube"
	I0906 18:30:34.004456   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:34.005135   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.005150   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.005184   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.005504   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.005684   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:34.006368   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.006381   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.006406   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.006984   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.007004   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.007033   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.010419   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.010471   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	W0906 18:30:34.015570   16537 out.go:270] * 
	W0906 18:30:34.015607   16537 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0906 18:30:34.015651   16537 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 18:30:34.016822   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.016859   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.016897   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.017262   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.018115   16537 out.go:177] * Verifying Kubernetes components...
	I0906 18:30:34.018267   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.018291   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.018991   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.019007   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.019040   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.019345   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.020062   16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0906 18:30:34.024587   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.025798   16537 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 18:30:34.026106   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.026894   16537 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:34.026932   16537 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 18:30:34.027071   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2084483678 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:34.028561   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.028606   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.031357   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.031378   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.031405   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.033623   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.034959   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.035027   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.041232   16537 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 18:30:34.042890   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.044806   16537 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:34.044865   16537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:34.044895   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 18:30:34.044898   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 18:30:34.045013   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2243718550 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:34.045106   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2332487414 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:34.048752   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.048796   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.048809   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.053140   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.053204   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.054215   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.055696   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.058117   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.058141   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.063569   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.065090   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.065115   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.065533   16537 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 18:30:34.067049   16537 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 18:30:34.067088   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 18:30:34.067235   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3876569157 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 18:30:34.070206   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.071314   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.071535   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.072067   16537 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 18:30:34.073425   16537 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:34.074427   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.074449   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.076177   16537 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 18:30:34.076328   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1849676009 /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:34.076969   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.077023   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.080331   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.080658   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.080664   16537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:34.080713   16537 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 18:30:34.080825   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube154303027 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:34.080876   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:34.082843   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 18:30:34.084061   16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:34.084087   16537 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 18:30:34.084092   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.084138   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.084245   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1049986199 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:34.086659   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.086701   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.090381   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.090435   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.092523   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.093613   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.093662   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.096440   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.096821   16537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:34.096855   16537 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 18:30:34.096999   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2006494449 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:34.098538   16537 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 18:30:34.099965   16537 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:34.099998   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 18:30:34.100127   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube695944334 /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:34.104933   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.104978   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.107507   16537 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:30:34.109388   16537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:34.109415   16537 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 18:30:34.109540   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3059371932 /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:34.109938   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.109959   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.111528   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.111551   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.114904   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.115890   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:34.116337   16537 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0906 18:30:34.116371   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:34.116866   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.116883   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.116909   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.117113   16537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:34.117139   16537 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 18:30:34.117280   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1917499766 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:34.117484   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.117498   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.117612   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.117633   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.118565   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.120281   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.120470   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.120536   16537 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:34.120574   16537 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 18:30:34.120704   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3675847591 /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:34.120996   16537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:30:34.121923   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.122034   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.122505   16537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:34.122524   16537 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0906 18:30:34.122532   16537 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:34.122567   16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:34.124535   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.124687   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.124701   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:34.126351   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 18:30:34.127335   16537 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 18:30:34.127370   16537 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 18:30:34.127487   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4117090496 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 18:30:34.129761   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 18:30:34.131163   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 18:30:34.132145   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:34.133851   16537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:34.133873   16537 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 18:30:34.133960   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 18:30:34.133986   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2090160348 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:34.136921   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 18:30:34.138165   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 18:30:34.139949   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 18:30:34.141511   16537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 18:30:34.142798   16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:34.142851   16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 18:30:34.143125   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube750790998 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:34.148310   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.148355   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.153263   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.155043   16537 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 18:30:34.156631   16537 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:34.156674   16537 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 18:30:34.156938   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube771928026 /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:34.161031   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.161051   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.161368   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.161390   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.162530   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:30:34.162661   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2304560589 /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:34.166193   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.168192   16537 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0906 18:30:34.168235   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:34.168494   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.168946   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:34.168964   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:34.169002   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:34.170819   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.170841   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.171594   16537 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0906 18:30:34.174863   16537 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0906 18:30:34.176305   16537 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:34.176336   16537 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 18:30:34.176817   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3049052891 /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:34.177852   16537 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0906 18:30:34.178736   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.179666   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.181290   16537 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 18:30:34.181574   16537 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:34.181599   16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 18:30:34.181800   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1375878742 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:34.182460   16537 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:34.182502   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0906 18:30:34.183260   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube243895838 /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:34.184409   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:34.186420   16537 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 18:30:34.187412   16537 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:34.187439   16537 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 18:30:34.187563   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1156456590 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:34.187930   16537 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:34.187957   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 18:30:34.188079   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2844190885 /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:34.202427   16537 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:34.202470   16537 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 18:30:34.202743   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2716063426 /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:34.204633   16537 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:34.204668   16537 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 18:30:34.204791   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube572377492 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:34.210421   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:34.214209   16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:34.214266   16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 18:30:34.214476   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3837124974 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:34.229312   16537 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:34.229347   16537 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 18:30:34.229467   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1617842210 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:34.238269   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:34.239584   16537 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:34.239613   16537 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 18:30:34.239728   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube947277937 /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:34.240210   16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:34.240239   16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 18:30:34.240344   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1650829709 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:34.244928   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.245072   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.258680   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.258713   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.261269   16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:34.261300   16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 18:30:34.261419   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2238946092 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:34.264067   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.264116   16537 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:34.264129   16537 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0906 18:30:34.264141   16537 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:34.264182   16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:34.264487   16537 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:34.264512   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 18:30:34.264626   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube183100232 /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:34.280914   16537 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:34.280949   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 18:30:34.281080   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3416468484 /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:34.282449   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:34.283747   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:34.284235   16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:34.284264   16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 18:30:34.284391   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube723907704 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:34.292370   16537 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:34.292403   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 18:30:34.292531   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3737032807 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:34.302679   16537 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:30:34.302866   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1539627577 /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:34.311269   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:34.311638   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:34.311721   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:34.316939   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:34.329889   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:34.338999   16537 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:34.339035   16537 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 18:30:34.339165   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2197815068 /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:34.364285   16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:34.364321   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 18:30:34.364464   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube867340576 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:34.375484   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:34.375513   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:34.387529   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:34.391091   16537 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 18:30:34.392709   16537 out.go:177]   - Using image docker.io/busybox:stable
	I0906 18:30:34.394657   16537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:34.394745   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 18:30:34.394915   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2454301497 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:34.414882   16537 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:34.414924   16537 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 18:30:34.415046   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1184914514 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:34.443978   16537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:34.444018   16537 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 18:30:34.444160   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4226892386 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:34.476443   16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:34.476482   16537 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 18:30:34.476604   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2960343500 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:34.493627   16537 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0906 18:30:34.499251   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:34.566905   16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:34.566948   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 18:30:34.567436   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1995683430 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:34.575793   16537 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
	I0906 18:30:34.579172   16537 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
	I0906 18:30:34.579193   16537 node_ready.go:38] duration metric: took 3.370369ms for node "ubuntu-20-agent-9" to be "Ready" ...
	I0906 18:30:34.579205   16537 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:34.596160   16537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:34.637283   16537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:34.637320   16537 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 18:30:34.637463   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1131400545 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:34.711621   16537 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0906 18:30:34.726204   16537 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:34.726250   16537 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 18:30:34.727197   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2396398083 /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:34.833711   16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:34.833755   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 18:30:34.833925   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1621882382 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:34.907141   16537 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:34.907181   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 18:30:34.907385   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4073258762 /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:34.946319   16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:34.946362   16537 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 18:30:34.946515   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2718334830 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:34.989043   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:34.999900   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:35.226872   16537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0906 18:30:35.261637   16537 addons.go:475] Verifying addon registry=true in "minikube"
	I0906 18:30:35.266072   16537 out.go:177] * Verifying registry addon...
	I0906 18:30:35.277165   16537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 18:30:35.279185   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.040871516s)
	I0906 18:30:35.281905   16537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 18:30:35.281934   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.299008   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.166818137s)
	I0906 18:30:35.299044   16537 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0906 18:30:35.363437   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.178980842s)
	I0906 18:30:35.580204   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.080913754s)
	I0906 18:30:35.748641   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.466145892s)
	I0906 18:30:35.757870   16537 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0906 18:30:35.781417   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.210568   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.89913039s)
	W0906 18:30:36.210615   16537 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:36.210653   16537 retry.go:31] will retry after 191.026519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:36.285705   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.401892   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:36.608601   16537 pod_ready.go:103] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:36.781517   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.284671   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.418270   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.207813152s)
	I0906 18:30:37.623011   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.623046717s)
	I0906 18:30:37.623050   16537 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0906 18:30:37.624618   16537 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 18:30:37.627325   16537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 18:30:37.633668   16537 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 18:30:37.633696   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.781471   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:38.133361   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:38.281490   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:38.631347   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:38.781233   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.101212   16537 pod_ready.go:103] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:39.132352   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.198479   16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.796527775s)
	I0906 18:30:39.280505   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.632573   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.781982   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.131808   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.281000   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.632869   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.781411   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:41.102458   16537 pod_ready.go:103] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:41.131674   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.132891   16537 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 18:30:41.133039   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4117297751 /var/lib/minikube/google_application_credentials.json
	I0906 18:30:41.145610   16537 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 18:30:41.145750   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3605993201 /var/lib/minikube/google_cloud_project
	I0906 18:30:41.159210   16537 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0906 18:30:41.159266   16537 host.go:66] Checking if "minikube" exists ...
	I0906 18:30:41.159965   16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0906 18:30:41.159987   16537 api_server.go:166] Checking apiserver status ...
	I0906 18:30:41.160026   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:41.180201   16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
	I0906 18:30:41.196342   16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
	I0906 18:30:41.196420   16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
	I0906 18:30:41.207939   16537 api_server.go:204] freezer state: "THAWED"
	I0906 18:30:41.207969   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:41.212142   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:41.212210   16537 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 18:30:41.215883   16537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:41.217306   16537 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 18:30:41.218611   16537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:41.218653   16537 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 18:30:41.218877   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1972949581 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:41.232029   16537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:41.232150   16537 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 18:30:41.232286   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754409336 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:41.244167   16537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:41.244194   16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 18:30:41.244302   16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube658414868 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:41.255928   16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:41.282052   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:41.632571   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.667180   16537 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0906 18:30:41.668704   16537 out.go:177] * Verifying gcp-auth addon...
	I0906 18:30:41.671280   16537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 18:30:41.731286   16537 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:30:41.831935   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.132189   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.280601   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.631681   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.817481   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.103026   16537 pod_ready.go:98] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:42 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-06 18:30:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:35 +0000 UTC,FinishedAt:2024-09-06 18:30:41 +0000 UTC,ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8 Started:0xc00136f0f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001734400} {Name:kube-api-access-pphq5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001734410}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:43.103054   16537 pod_ready.go:82] duration metric: took 8.506813205s for pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace to be "Ready" ...
	E0906 18:30:43.103067   16537 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:42 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-06 18:30:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:35 +0000 UTC,FinishedAt:2024-09-06 18:30:41 +0000 UTC,ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8 Started:0xc00136f0f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001734400} {Name:kube-api-access-pphq5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001734410}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:43.103080   16537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qszb4" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:43.132118   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.280621   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.632947   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.793473   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.108529   16537 pod_ready.go:93] pod "coredns-6f6b679f8f-qszb4" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:44.108551   16537 pod_ready.go:82] duration metric: took 1.005461036s for pod "coredns-6f6b679f8f-qszb4" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.108560   16537 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.112057   16537 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:44.112074   16537 pod_ready.go:82] duration metric: took 3.508277ms for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.112083   16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.115629   16537 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:44.115645   16537 pod_ready.go:82] duration metric: took 3.557233ms for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.115655   16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.119067   16537 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:44.119085   16537 pod_ready.go:82] duration metric: took 3.422424ms for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.119096   16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bpcdx" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.131298   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.281681   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.300150   16537 pod_ready.go:93] pod "kube-proxy-bpcdx" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:44.300169   16537 pod_ready.go:82] duration metric: took 181.066717ms for pod "kube-proxy-bpcdx" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.300179   16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.631894   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.700871   16537 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:44.700892   16537 pod_ready.go:82] duration metric: took 400.707515ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.700904   16537 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-992f8" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:44.781033   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.100064   16537 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-992f8" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:45.100097   16537 pod_ready.go:82] duration metric: took 399.185487ms for pod "nvidia-device-plugin-daemonset-992f8" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:45.100110   16537 pod_ready.go:39] duration metric: took 10.520892482s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:45.100135   16537 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:30:45.100201   16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:45.116434   16537 api_server.go:72] duration metric: took 11.100738952s to wait for apiserver process to appear ...
	I0906 18:30:45.116467   16537 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:30:45.116489   16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0906 18:30:45.119934   16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0906 18:30:45.120729   16537 api_server.go:141] control plane version: v1.31.0
	I0906 18:30:45.120748   16537 api_server.go:131] duration metric: took 4.274409ms to wait for apiserver health ...
	I0906 18:30:45.120756   16537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:30:45.132003   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.280778   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.304970   16537 system_pods.go:59] 17 kube-system pods found
	I0906 18:30:45.304999   16537 system_pods.go:61] "coredns-6f6b679f8f-qszb4" [d2454283-5b35-4985-9435-66f20cce3bef] Running
	I0906 18:30:45.305008   16537 system_pods.go:61] "csi-hostpath-attacher-0" [1734d6de-d709-4f90-814d-4ff9e56e98ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:45.305014   16537 system_pods.go:61] "csi-hostpath-resizer-0" [11950ea3-6ebf-45da-977a-e9492c7efbf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:45.305023   16537 system_pods.go:61] "csi-hostpathplugin-vj2d9" [bc23e849-db9a-4bb7-b7ca-ddb2fef143b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:45.305028   16537 system_pods.go:61] "etcd-ubuntu-20-agent-9" [33046595-0146-42d7-bdfa-98907e349e30] Running
	I0906 18:30:45.305032   16537 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [259f6f47-ac30-4ced-956e-7b622b01486e] Running
	I0906 18:30:45.305037   16537 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [a95cb8b3-5f88-424e-ba5b-aa1515570850] Running
	I0906 18:30:45.305040   16537 system_pods.go:61] "kube-proxy-bpcdx" [8e23e962-188d-400f-937e-be6e054a9f3a] Running
	I0906 18:30:45.305045   16537 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [2e5e96d0-74b6-4dd5-853e-0a77544068a3] Running
	I0906 18:30:45.305051   16537 system_pods.go:61] "metrics-server-84c5f94fbc-9xnx2" [8878e996-0060-4707-ae80-56a0f78c6d2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:45.305055   16537 system_pods.go:61] "nvidia-device-plugin-daemonset-992f8" [8b1041db-3dd1-4e54-b9f6-2006b5cfd9e8] Running
	I0906 18:30:45.305060   16537 system_pods.go:61] "registry-6fb4cdfc84-v2hm7" [e6f429ae-9168-48c6-8e02-968ce47780ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:45.305065   16537 system_pods.go:61] "registry-proxy-lp46v" [6d764de2-9241-4f56-9564-ae56133efa57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:45.305077   16537 system_pods.go:61] "snapshot-controller-56fcc65765-6c22x" [559b5a4e-9454-4406-ab38-ac3d9cf3d36f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:45.305087   16537 system_pods.go:61] "snapshot-controller-56fcc65765-dlght" [9cca90da-19cc-47fb-a0af-622e17b6bf43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:45.305094   16537 system_pods.go:61] "storage-provisioner" [1a63b402-53de-4e52-8e77-0230c672502a] Running
	I0906 18:30:45.305102   16537 system_pods.go:61] "tiller-deploy-b48cc5f79-qjchh" [683f5399-2294-48a0-b4b9-f13807b03c3a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 18:30:45.305111   16537 system_pods.go:74] duration metric: took 184.34887ms to wait for pod list to return data ...
	I0906 18:30:45.305125   16537 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:30:45.500311   16537 default_sa.go:45] found service account: "default"
	I0906 18:30:45.500337   16537 default_sa.go:55] duration metric: took 195.205319ms for default service account to be created ...
	I0906 18:30:45.500349   16537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:30:45.632739   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.705860   16537 system_pods.go:86] 17 kube-system pods found
	I0906 18:30:45.705967   16537 system_pods.go:89] "coredns-6f6b679f8f-qszb4" [d2454283-5b35-4985-9435-66f20cce3bef] Running
	I0906 18:30:45.705992   16537 system_pods.go:89] "csi-hostpath-attacher-0" [1734d6de-d709-4f90-814d-4ff9e56e98ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:45.705999   16537 system_pods.go:89] "csi-hostpath-resizer-0" [11950ea3-6ebf-45da-977a-e9492c7efbf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:45.706006   16537 system_pods.go:89] "csi-hostpathplugin-vj2d9" [bc23e849-db9a-4bb7-b7ca-ddb2fef143b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:45.706020   16537 system_pods.go:89] "etcd-ubuntu-20-agent-9" [33046595-0146-42d7-bdfa-98907e349e30] Running
	I0906 18:30:45.706026   16537 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [259f6f47-ac30-4ced-956e-7b622b01486e] Running
	I0906 18:30:45.706032   16537 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [a95cb8b3-5f88-424e-ba5b-aa1515570850] Running
	I0906 18:30:45.706036   16537 system_pods.go:89] "kube-proxy-bpcdx" [8e23e962-188d-400f-937e-be6e054a9f3a] Running
	I0906 18:30:45.706040   16537 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [2e5e96d0-74b6-4dd5-853e-0a77544068a3] Running
	I0906 18:30:45.706045   16537 system_pods.go:89] "metrics-server-84c5f94fbc-9xnx2" [8878e996-0060-4707-ae80-56a0f78c6d2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:45.706049   16537 system_pods.go:89] "nvidia-device-plugin-daemonset-992f8" [8b1041db-3dd1-4e54-b9f6-2006b5cfd9e8] Running
	I0906 18:30:45.706056   16537 system_pods.go:89] "registry-6fb4cdfc84-v2hm7" [e6f429ae-9168-48c6-8e02-968ce47780ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:45.706062   16537 system_pods.go:89] "registry-proxy-lp46v" [6d764de2-9241-4f56-9564-ae56133efa57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:45.706075   16537 system_pods.go:89] "snapshot-controller-56fcc65765-6c22x" [559b5a4e-9454-4406-ab38-ac3d9cf3d36f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:45.706081   16537 system_pods.go:89] "snapshot-controller-56fcc65765-dlght" [9cca90da-19cc-47fb-a0af-622e17b6bf43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:45.706097   16537 system_pods.go:89] "storage-provisioner" [1a63b402-53de-4e52-8e77-0230c672502a] Running
	I0906 18:30:45.706113   16537 system_pods.go:89] "tiller-deploy-b48cc5f79-qjchh" [683f5399-2294-48a0-b4b9-f13807b03c3a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 18:30:45.706123   16537 system_pods.go:126] duration metric: took 205.768721ms to wait for k8s-apps to be running ...
	I0906 18:30:45.706133   16537 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:30:45.706173   16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:30:45.720877   16537 system_svc.go:56] duration metric: took 14.733045ms WaitForService to wait for kubelet
	I0906 18:30:45.720907   16537 kubeadm.go:582] duration metric: took 11.70521539s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:30:45.720928   16537 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:30:45.780728   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.900159   16537 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0906 18:30:45.900184   16537 node_conditions.go:123] node cpu capacity is 8
	I0906 18:30:45.900195   16537 node_conditions.go:105] duration metric: took 179.262466ms to run NodePressure ...
	I0906 18:30:45.900205   16537 start.go:241] waiting for startup goroutines ...
	I0906 18:30:46.131671   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.281155   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.634493   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.781907   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.132256   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.280753   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.632320   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.781139   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.132335   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.281234   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.650167   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.851848   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.131131   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.280540   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.632739   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.780854   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.131634   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.280907   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.631720   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.780374   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.131906   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.280836   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.633011   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.781740   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.133432   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.281146   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.632269   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.781364   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.132651   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.314521   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.632561   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.781014   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.132063   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.280784   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.631816   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.780901   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.131565   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.280780   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.631597   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.780852   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.131345   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.280568   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.632951   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.833590   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.133221   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.281305   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.631542   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.780668   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.132997   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.280706   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.632005   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.781079   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.132274   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.280823   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.631912   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.780748   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.131059   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.280622   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.631713   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.780760   16537 kapi.go:107] duration metric: took 25.503596012s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 18:31:01.132962   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.631568   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.132718   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.631781   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.131889   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.631583   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.131428   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.631994   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.132349   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.632491   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.131360   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.631800   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.132267   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.632030   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.132673   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.632551   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.131606   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.632354   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.132139   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.632560   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.131233   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.631172   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.131879   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.631748   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.131552   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.632819   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.131812   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.632351   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.131838   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.631985   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.131713   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.632422   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.131820   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.632622   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.132723   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.632698   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.131433   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.632440   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.132183   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.632883   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.131836   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.631808   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.132365   16537 kapi.go:107] duration metric: took 44.505042987s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 18:32:03.675254   16537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:32:03.675278   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:04.174612   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:04.674317   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:05.174241   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:05.674391   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:06.174498   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:06.674882   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:07.175050   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:07.674996   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:08.174912   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:08.674806   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:09.175022   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:09.675035   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:10.174854   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:10.675101   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:11.174985   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:11.675536   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:12.174427   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:12.674452   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:13.174533   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:13.674752   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:14.174885   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:14.674691   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:15.195985   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:15.674436   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:16.174611   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:16.675061   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:17.174589   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:17.674270   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:18.174304   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:18.674706   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:19.174682   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:19.675405   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:20.173993   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:20.675601   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:21.174499   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:21.674929   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:22.174795   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:22.674466   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:23.174309   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:23.674395   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:24.174455   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:24.674512   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:25.174416   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:25.674615   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:26.174926   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:26.674163   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:27.174245   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:27.674613   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:28.174905   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:28.675471   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:29.176051   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:29.675505   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:30.174329   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:30.674365   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:31.174450   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:31.674947   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:32.174572   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:32.674173   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:33.175001   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:33.675022   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:34.174166   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:34.675035   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:35.174485   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:35.674579   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:36.174702   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:36.675070   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:37.174457   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:37.674870   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:38.175256   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:38.674495   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:39.174406   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:39.674525   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:40.174338   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:40.674043   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:41.175403   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:41.674761   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:42.174476   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:42.674362   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:43.174492   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:43.674890   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:44.174997   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:44.674743   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:45.174711   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:45.674531   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:46.175188   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:46.674242   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:47.174259   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:47.673996   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:48.175456   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:48.674266   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:49.173913   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:49.675495   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:50.174917   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:50.675013   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:51.174902   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:51.675649   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:52.174573   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:52.674949   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:53.174763   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:53.675014   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:54.174980   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:54.677768   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:55.174510   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:55.674480   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:56.174895   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:56.674286   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:57.174332   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:57.674659   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:58.175030   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:58.675369   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:59.174377   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:59.674634   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:00.174734   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:00.674736   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:01.174432   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:01.675015   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:02.174693   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:02.675162   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:03.174196   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:03.675158   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:04.174832   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:04.674847   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:05.174542   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:05.675003   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:06.174948   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:06.675579   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:07.174481   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:07.675250   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:08.174971   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:08.674659   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:09.174871   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:09.675102   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:10.175060   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:10.674995   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:11.175206   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:11.674281   16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:12.175025   16537 kapi.go:107] duration metric: took 2m30.503741494s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 18:33:12.177001   16537 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0906 18:33:12.178607   16537 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 18:33:12.179971   16537 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 18:33:12.181533   16537 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, helm-tiller, metrics-server, storage-provisioner, storage-provisioner-rancher, yakd, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0906 18:33:12.182871   16537 addons.go:510] duration metric: took 2m38.211585206s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass helm-tiller metrics-server storage-provisioner storage-provisioner-rancher yakd inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0906 18:33:12.182921   16537 start.go:246] waiting for cluster config update ...
	I0906 18:33:12.182942   16537 start.go:255] writing updated cluster config ...
	I0906 18:33:12.183213   16537 exec_runner.go:51] Run: rm -f paused
	I0906 18:33:12.226992   16537 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:33:12.228930   16537 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-31 10:18:00 UTC, end at Fri 2024-09-06 18:43:05 UTC. --
	Sep 06 18:36:54 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:36:54.442082840Z" level=info msg="ignoring event" container=45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:39:32 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:39:32.000567595Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 06 18:39:32 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:39:32.002987623Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 06 18:41:55 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:41:55Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.224981505Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.224981968Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.226929562Z" level=error msg="Error running exec 4c2d34e30d2301c2234aa87ab1c3af7254beea053a6a35eb74e0882a5133546b in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.280523261Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.280548189Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.282675263Z" level=error msg="Error running exec b661f549806c500daf7d46af17da2b156c2e3a329159678119f4b7f9822ecbd9 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.453734589Z" level=info msg="ignoring event" container=13d13599bb3bc1088344955566b325e55df92da6557beeaf56942782bccd3e7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:00 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:42:00Z" level=error msg="error getting RW layer size for container ID '45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3': Error response from daemon: No such container: 45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3"
	Sep 06 18:42:00 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:42:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3'"
	Sep 06 18:42:04 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:42:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef97bb9f2d83a276492bfe7d078016390ee9a5387f9e635e646007a22fa9b775/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 06 18:42:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:04.943298585Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:42:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:04.945588707Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:42:20 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:20.004859184Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:42:20 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:20.007470736Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:42:47 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:47.003878533Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:42:47 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:47.006327360Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.392466865Z" level=info msg="ignoring event" container=ef97bb9f2d83a276492bfe7d078016390ee9a5387f9e635e646007a22fa9b775 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.651237093Z" level=info msg="ignoring event" container=cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.719123085Z" level=info msg="ignoring event" container=f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.788400901Z" level=info msg="ignoring event" container=b2aa1645ab57f3a3cdf56b5f85885f48894a6c8cd15be305624149805f1fce7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.891750101Z" level=info msg="ignoring event" container=8f24dc2a208c77aeeaa923aa38dcc92211f8c23d1c7e9794befbf58731566f80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	13d13599bb3bc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            About a minute ago   Exited              gadget                                   7                   fc0c10c9db065       gadget-jbvsd
	9e901f66a10e4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago        Running             gcp-auth                                 0                   216533415c82b       gcp-auth-89d5ffd79-ppkkk
	43a580f7d3080       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago       Running             csi-snapshotter                          0                   ce1af66ffb7e1       csi-hostpathplugin-vj2d9
	61e25d0882034       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago       Running             csi-provisioner                          0                   ce1af66ffb7e1       csi-hostpathplugin-vj2d9
	6fb4e180217f1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago       Running             liveness-probe                           0                   ce1af66ffb7e1       csi-hostpathplugin-vj2d9
	43a2a792132a1       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago       Running             hostpath                                 0                   ce1af66ffb7e1       csi-hostpathplugin-vj2d9
	254e95b2101e2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago       Running             node-driver-registrar                    0                   ce1af66ffb7e1       csi-hostpathplugin-vj2d9
	1a6dd9016ca8e       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago       Running             csi-resizer                              0                   e84cdcc4a4a03       csi-hostpath-resizer-0
	baa9128400027       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago       Running             csi-external-health-monitor-controller   0                   ce1af66ffb7e1       csi-hostpathplugin-vj2d9
	1c73fb968274b       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago       Running             csi-attacher                             0                   8f560ac8f7195       csi-hostpath-attacher-0
	381f8618a360d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago       Running             volume-snapshot-controller               0                   290bcb16f425b       snapshot-controller-56fcc65765-dlght
	40ab53c29a72b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago       Running             volume-snapshot-controller               0                   d56a668f399a0       snapshot-controller-56fcc65765-6c22x
	d588c3dde7304       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago       Running             local-path-provisioner                   0                   36539825ade8b       local-path-provisioner-86d989889c-slpgf
	63ab37da08e18       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        12 minutes ago       Running             yakd                                     0                   a03de44fd66e6       yakd-dashboard-67d98fc6b-k9hqw
	f6083c47174f5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              12 minutes ago       Exited              registry-proxy                           0                   8f24dc2a208c7       registry-proxy-lp46v
	875be12584b5c       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  12 minutes ago       Running             tiller                                   0                   018f5218c7d09       tiller-deploy-b48cc5f79-qjchh
	cc64c892d533c       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                                             12 minutes ago       Exited              registry                                 0                   b2aa1645ab57f       registry-6fb4cdfc84-v2hm7
	e12546717979e       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago       Running             metrics-server                           0                   07510fb077e82       metrics-server-84c5f94fbc-9xnx2
	db791af5a2bf3       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               12 minutes ago       Running             cloud-spanner-emulator                   0                   7e82da2feb472       cloud-spanner-emulator-769b77f747-q2pqv
	89bd2292a8f3a       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     12 minutes ago       Running             nvidia-device-plugin-ctr                 0                   1576fc1afcd47       nvidia-device-plugin-daemonset-992f8
	a203a84e71cb9       6e38f40d628db                                                                                                                                12 minutes ago       Running             storage-provisioner                      0                   16d1656f00ae7       storage-provisioner
	c23f2594a3e7f       cbb01a7bd410d                                                                                                                                12 minutes ago       Running             coredns                                  0                   76d7b635d5c52       coredns-6f6b679f8f-qszb4
	90bc0c9443152       ad83b2ca7b09e                                                                                                                                12 minutes ago       Running             kube-proxy                               0                   49af432d00aae       kube-proxy-bpcdx
	ed8ba4cf73612       045733566833c                                                                                                                                12 minutes ago       Running             kube-controller-manager                  0                   033361b24f009       kube-controller-manager-ubuntu-20-agent-9
	e43fdf744ae9d       1766f54c897f0                                                                                                                                12 minutes ago       Running             kube-scheduler                           0                   6e8074a8346e1       kube-scheduler-ubuntu-20-agent-9
	a2029506d4d12       604f5db92eaa8                                                                                                                                12 minutes ago       Running             kube-apiserver                           0                   bcf316f77232e       kube-apiserver-ubuntu-20-agent-9
	36661887fea2f       2e96e5913fc06                                                                                                                                12 minutes ago       Running             etcd                                     0                   7149a3a1b832b       etcd-ubuntu-20-agent-9
	
	
	==> coredns [c23f2594a3e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37563 - 12641 "HINFO IN 774206148356767685.291074717509105252. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.014129384s
	[INFO] 10.244.0.24:52539 - 50572 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000291492s
	[INFO] 10.244.0.24:57143 - 27585 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363221s
	[INFO] 10.244.0.24:33719 - 16169 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133656s
	[INFO] 10.244.0.24:33286 - 19949 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151096s
	[INFO] 10.244.0.24:44184 - 31175 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072861s
	[INFO] 10.244.0.24:57925 - 47331 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118642s
	[INFO] 10.244.0.24:50953 - 11654 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.003799887s
	[INFO] 10.244.0.24:41444 - 44307 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.003869916s
	[INFO] 10.244.0.24:43269 - 30106 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003416574s
	[INFO] 10.244.0.24:49708 - 8189 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003728298s
	[INFO] 10.244.0.24:40200 - 19637 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004289346s
	[INFO] 10.244.0.24:59273 - 19615 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004343273s
	[INFO] 10.244.0.24:54328 - 52059 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002801115s
	[INFO] 10.244.0.24:42042 - 39807 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003101451s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-9
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-9
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_30_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-9
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:30:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-9
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:43:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:39:11 +0000   Fri, 06 Sep 2024 18:30:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:39:11 +0000   Fri, 06 Sep 2024 18:30:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:39:11 +0000   Fri, 06 Sep 2024 18:30:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:39:11 +0000   Fri, 06 Sep 2024 18:30:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.154.0.4
	  Hostname:    ubuntu-20-agent-9
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                4894487b-7b30-e033-3a9d-c6f45b6c4cf8
	  Boot ID:                    3a37642b-ebc9-482a-807f-9b9abd72965a
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-769b77f747-q2pqv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-jbvsd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-ppkkk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-qszb4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-vj2d9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ubuntu-20-agent-9                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-ubuntu-20-agent-9             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-9    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bpcdx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ubuntu-20-agent-9             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-9xnx2              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-992f8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-6c22x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-dlght         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 tiller-deploy-b48cc5f79-qjchh                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-slpgf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-k9hqw               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 2a f7 67 48 98 08 06
	[  +1.194419] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 b9 97 f7 84 25 08 06
	[  +0.046903] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff 4e 29 68 17 31 66 08 06
	[  +2.804668] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 76 3a e8 8a 9b 08 06
	[  +2.016476] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 b7 4a c6 b1 cf 08 06
	[  +2.300816] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 54 aa 8e 8a dd 08 06
	[  +5.305539] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 b1 d6 59 f5 47 08 06
	[  +0.422238] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 26 be 97 a2 7e 08 06
	[  +0.139849] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 33 22 6f 68 db 08 06
	[Sep 6 18:32] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 94 c5 69 77 17 08 06
	[  +0.041917] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff de 21 66 a4 1e 92 08 06
	[Sep 6 18:33] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2a 0b 82 05 5c 84 08 06
	[  +0.000447] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 62 aa 17 2e b3 08 06
	
	
	==> etcd [36661887fea2] <==
	{"level":"info","ts":"2024-09-06T18:30:24.810584Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T18:30:25.000402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:25.000461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:25.000482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgPreVoteResp from 82d4d36e40f9b4a at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:25.000496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:25.000504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:25.000514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:25.000524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:25.001467Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:25.001624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:30:25.001637Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T18:30:25.001656Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:30:25.001893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T18:30:25.001916Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T18:30:25.002284Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:25.002356Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:25.002381Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:25.002884Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:30:25.002930Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:30:25.003741Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T18:30:25.004039Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
	{"level":"warn","ts":"2024-09-06T18:31:02.991362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.00846ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11189916514919458939 > lease_revoke:<id:1b4a91c89a0d2775>","response":"size:27"}
	{"level":"info","ts":"2024-09-06T18:40:25.503353Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1720}
	{"level":"info","ts":"2024-09-06T18:40:25.528219Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1720,"took":"24.363894ms","hash":2879840899,"current-db-size-bytes":8638464,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4452352,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-09-06T18:40:25.528279Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2879840899,"revision":1720,"compact-revision":-1}
	
	
	==> gcp-auth [9e901f66a10e] <==
	2024/09/06 18:33:11 GCP Auth Webhook started!
	2024/09/06 18:33:27 Ready to marshal response ...
	2024/09/06 18:33:27 Ready to write response ...
	2024/09/06 18:33:28 Ready to marshal response ...
	2024/09/06 18:33:28 Ready to write response ...
	2024/09/06 18:33:51 Ready to marshal response ...
	2024/09/06 18:33:51 Ready to write response ...
	2024/09/06 18:33:51 Ready to marshal response ...
	2024/09/06 18:33:51 Ready to write response ...
	2024/09/06 18:33:51 Ready to marshal response ...
	2024/09/06 18:33:51 Ready to write response ...
	2024/09/06 18:42:04 Ready to marshal response ...
	2024/09/06 18:42:04 Ready to write response ...
	
	
	==> kernel <==
	 18:43:05 up 25 min,  0 users,  load average: 0.62, 0.53, 0.42
	Linux ubuntu-20-agent-9 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [a2029506d4d1] <==
	W0906 18:32:03.644740       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.105.222:443: connect: connection refused
	E0906 18:32:03.644775       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.105.222:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:44.696688       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.105.222:443: connect: connection refused
	E0906 18:32:44.696723       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.105.222:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:44.707950       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.105.222:443: connect: connection refused
	E0906 18:32:44.707984       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.105.222:443: connect: connection refused" logger="UnhandledError"
	I0906 18:33:27.506439       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0906 18:33:27.524321       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0906 18:33:41.900369       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0906 18:33:41.910576       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0906 18:33:42.031470       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0906 18:33:42.033286       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0906 18:33:42.058926       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	E0906 18:33:42.064169       1 watch.go:250] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	I0906 18:33:42.181636       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0906 18:33:42.213035       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0906 18:33:42.226073       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0906 18:33:42.278681       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0906 18:33:42.932416       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0906 18:33:43.111328       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0906 18:33:43.182740       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0906 18:33:43.182822       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0906 18:33:43.182754       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0906 18:33:43.279648       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0906 18:33:43.460455       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [ed8ba4cf7361] <==
	W0906 18:42:04.665852       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:04.665901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:06.103775       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:06.103823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:11.211356       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:11.211398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:22.811720       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:22.811773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:23.776903       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:23.776946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:31.253293       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:31.253333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:45.028543       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:45.028584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:50.457768       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:50.457819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:53.213376       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:53.213420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:55.674120       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:55.674167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:57.055487       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:57.055527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:59.910064       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:59.910104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:43:04.616163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="7.958µs"
	
	
	==> kube-proxy [90bc0c944315] <==
	I0906 18:30:35.030348       1 server_linux.go:66] "Using iptables proxy"
	I0906 18:30:35.251292       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
	E0906 18:30:35.251362       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:30:35.368293       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0906 18:30:35.368361       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:30:35.372075       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:30:35.372461       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:30:35.372488       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:30:35.374854       1 config.go:197] "Starting service config controller"
	I0906 18:30:35.374894       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:30:35.374928       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:30:35.374935       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:30:35.375547       1 config.go:326] "Starting node config controller"
	I0906 18:30:35.375557       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:30:35.479794       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:30:35.479836       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:30:35.479879       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e43fdf744ae9] <==
	W0906 18:30:26.357136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:26.357434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:26.357441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 18:30:26.357122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:26.357503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0906 18:30:26.357468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.224342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:30:27.224385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.268121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 18:30:27.268159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.356890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:27.356942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.397211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:30:27.397255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.411638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 18:30:27.411682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.481244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:27.481297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.487576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:30:27.487651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.502109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 18:30:27.502151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:27.534546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:27.534583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0906 18:30:27.854543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-31 10:18:00 UTC, end at Fri 2024-09-06 18:43:05 UTC. --
	Sep 06 18:42:50 ubuntu-20-agent-9 kubelet[17964]: E0906 18:42:50.850958   17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b1ded35-01c2-48b1-88e5-1582b38ae658"
	Sep 06 18:42:56 ubuntu-20-agent-9 kubelet[17964]: I0906 18:42:56.848473   17964 scope.go:117] "RemoveContainer" containerID="13d13599bb3bc1088344955566b325e55df92da6557beeaf56942782bccd3e7e"
	Sep 06 18:42:56 ubuntu-20-agent-9 kubelet[17964]: E0906 18:42:56.848658   17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-jbvsd_gadget(3fab1905-e39d-4c14-9987-dedecd9e7b17)\"" pod="gadget/gadget-jbvsd" podUID="3fab1905-e39d-4c14-9987-dedecd9e7b17"
	Sep 06 18:42:59 ubuntu-20-agent-9 kubelet[17964]: E0906 18:42:59.851330   17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="4192c3b4-f319-4822-b46f-6e4f1d417e49"
	Sep 06 18:43:01 ubuntu-20-agent-9 kubelet[17964]: E0906 18:43:01.850546   17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b1ded35-01c2-48b1-88e5-1582b38ae658"
	Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.599713   17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4192c3b4-f319-4822-b46f-6e4f1d417e49-gcp-creds\") pod \"4192c3b4-f319-4822-b46f-6e4f1d417e49\" (UID: \"4192c3b4-f319-4822-b46f-6e4f1d417e49\") "
	Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.599771   17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg2rg\" (UniqueName: \"kubernetes.io/projected/4192c3b4-f319-4822-b46f-6e4f1d417e49-kube-api-access-pg2rg\") pod \"4192c3b4-f319-4822-b46f-6e4f1d417e49\" (UID: \"4192c3b4-f319-4822-b46f-6e4f1d417e49\") "
	Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.599840   17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4192c3b4-f319-4822-b46f-6e4f1d417e49-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4192c3b4-f319-4822-b46f-6e4f1d417e49" (UID: "4192c3b4-f319-4822-b46f-6e4f1d417e49"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.601707   17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4192c3b4-f319-4822-b46f-6e4f1d417e49-kube-api-access-pg2rg" (OuterVolumeSpecName: "kube-api-access-pg2rg") pod "4192c3b4-f319-4822-b46f-6e4f1d417e49" (UID: "4192c3b4-f319-4822-b46f-6e4f1d417e49"). InnerVolumeSpecName "kube-api-access-pg2rg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.703121   17964 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4192c3b4-f319-4822-b46f-6e4f1d417e49-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.703152   17964 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pg2rg\" (UniqueName: \"kubernetes.io/projected/4192c3b4-f319-4822-b46f-6e4f1d417e49-kube-api-access-pg2rg\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.007988   17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s7cl\" (UniqueName: \"kubernetes.io/projected/e6f429ae-9168-48c6-8e02-968ce47780ae-kube-api-access-9s7cl\") pod \"e6f429ae-9168-48c6-8e02-968ce47780ae\" (UID: \"e6f429ae-9168-48c6-8e02-968ce47780ae\") "
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.010563   17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f429ae-9168-48c6-8e02-968ce47780ae-kube-api-access-9s7cl" (OuterVolumeSpecName: "kube-api-access-9s7cl") pod "e6f429ae-9168-48c6-8e02-968ce47780ae" (UID: "e6f429ae-9168-48c6-8e02-968ce47780ae"). InnerVolumeSpecName "kube-api-access-9s7cl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.108913   17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6df5\" (UniqueName: \"kubernetes.io/projected/6d764de2-9241-4f56-9564-ae56133efa57-kube-api-access-h6df5\") pod \"6d764de2-9241-4f56-9564-ae56133efa57\" (UID: \"6d764de2-9241-4f56-9564-ae56133efa57\") "
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.109162   17964 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9s7cl\" (UniqueName: \"kubernetes.io/projected/e6f429ae-9168-48c6-8e02-968ce47780ae-kube-api-access-9s7cl\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.110854   17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d764de2-9241-4f56-9564-ae56133efa57-kube-api-access-h6df5" (OuterVolumeSpecName: "kube-api-access-h6df5") pod "6d764de2-9241-4f56-9564-ae56133efa57" (UID: "6d764de2-9241-4f56-9564-ae56133efa57"). InnerVolumeSpecName "kube-api-access-h6df5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.209556   17964 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h6df5\" (UniqueName: \"kubernetes.io/projected/6d764de2-9241-4f56-9564-ae56133efa57-kube-api-access-h6df5\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.217902   17964 scope.go:117] "RemoveContainer" containerID="f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.237003   17964 scope.go:117] "RemoveContainer" containerID="f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: E0906 18:43:05.237820   17964 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9" containerID="f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.237857   17964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"} err="failed to get container status \"f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9\": rpc error: code = Unknown desc = Error response from daemon: No such container: f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.237887   17964 scope.go:117] "RemoveContainer" containerID="cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.262217   17964 scope.go:117] "RemoveContainer" containerID="cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: E0906 18:43:05.263147   17964 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3" containerID="cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
	Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.263192   17964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"} err="failed to get container status \"cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3\": rpc error: code = Unknown desc = Error response from daemon: No such container: cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
	
	
	==> storage-provisioner [a203a84e71cb] <==
	I0906 18:30:36.635753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:36.654380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:36.654454       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:36.662742       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:36.663810       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_0f044dc0-9e66-4411-ba4e-a51be2401fa7!
	I0906 18:30:36.667248       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27d28267-8c43-487c-9686-90da8e1c914d", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_0f044dc0-9e66-4411-ba4e-a51be2401fa7 became leader
	I0906 18:30:36.764617       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_0f044dc0-9e66-4411-ba4e-a51be2401fa7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-9/10.154.0.4
	Start Time:       Fri, 06 Sep 2024 18:33:51 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5qkr9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5qkr9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-9
	  Normal   Pulling    7m50s (x4 over 9m13s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m49s (x4 over 9m13s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m49s (x4 over 9m13s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m24s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m11s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.80s)

                                                
                                    
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (560.74103ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.56s)

                                                
                                    

Test pass (104/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.79
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 1.47
15 TestDownloadOnly/v1.31.0/binaries 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
22 TestOffline 44.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 175.84
29 TestAddons/serial/Volcano 39.45
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.46
36 TestAddons/parallel/MetricsServer 6.38
37 TestAddons/parallel/HelmTiller 9.31
39 TestAddons/parallel/CSI 41.26
40 TestAddons/parallel/Headlamp 16.94
41 TestAddons/parallel/CloudSpanner 5.26
43 TestAddons/parallel/NvidiaDevicePlugin 5.23
44 TestAddons/parallel/Yakd 11.42
45 TestAddons/StoppedEnableDisable 10.77
47 TestCertExpiration 228.74
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 24.92
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 28.18
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.06
65 TestFunctional/serial/MinikubeKubectlCmd 0.1
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 38.24
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.82
70 TestFunctional/serial/LogsFileCmd 0.84
71 TestFunctional/serial/InvalidService 4.32
73 TestFunctional/parallel/ConfigCmd 0.26
74 TestFunctional/parallel/DashboardCmd 3.36
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.08
77 TestFunctional/parallel/StatusCmd 0.42
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.23
81 TestFunctional/parallel/ProfileCmd/profile_list 0.22
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.22
84 TestFunctional/parallel/ServiceCmd/DeployApp 10.14
85 TestFunctional/parallel/ServiceCmd/List 0.34
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.15
89 TestFunctional/parallel/ServiceCmd/URL 0.15
90 TestFunctional/parallel/ServiceCmdConnect 6.3
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 21.14
105 TestFunctional/parallel/MySQL 21.82
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.79
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.85
114 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/Version/components 0.39
121 TestFunctional/delete_echo-server_images 0.03
122 TestFunctional/delete_my-image_image 0.02
123 TestFunctional/delete_minikube_cached_images 0.01
128 TestImageBuild/serial/Setup 14.91
129 TestImageBuild/serial/NormalBuild 2.81
130 TestImageBuild/serial/BuildWithBuildArg 0.92
131 TestImageBuild/serial/BuildWithDockerIgnore 0.75
132 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.72
136 TestJSONOutput/start/Command 29.83
137 TestJSONOutput/start/Audit 0
139 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/pause/Command 0.49
143 TestJSONOutput/pause/Audit 0
145 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/unpause/Command 0.43
149 TestJSONOutput/unpause/Audit 0
151 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/stop/Command 5.32
155 TestJSONOutput/stop/Audit 0
157 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
159 TestErrorJSONOutput 0.2
164 TestMainNoArgs 0.04
165 TestMinikubeProfile 34.99
173 TestPause/serial/Start 30.16
174 TestPause/serial/SecondStartNoReconfiguration 30.94
175 TestPause/serial/Pause 0.53
176 TestPause/serial/VerifyStatus 0.13
177 TestPause/serial/Unpause 0.43
178 TestPause/serial/PauseAgain 0.54
179 TestPause/serial/DeletePaused 1.7
180 TestPause/serial/VerifyDeletedResources 0.06
194 TestRunningBinaryUpgrade 78.56
196 TestStoppedBinaryUpgrade/Setup 2.29
197 TestStoppedBinaryUpgrade/Upgrade 51.28
198 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
199 TestKubernetesUpgrade 312.12
TestDownloadOnly/v1.20.0/json-events (1.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.793244151s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.79s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (55.766042ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:27.037871   12750 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:27.038136   12750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:27.038146   12750 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:27.038151   12750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:27.038314   12750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-5859/.minikube/bin
	W0906 18:29:27.038436   12750 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19576-5859/.minikube/config/config.json: open /home/jenkins/minikube-integration/19576-5859/.minikube/config/config.json: no such file or directory
	I0906 18:29:27.039118   12750 out.go:352] Setting JSON to true
	I0906 18:29:27.040009   12750 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":709,"bootTime":1725646658,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:27.040062   12750 start.go:139] virtualization: kvm guest
	I0906 18:29:27.042278   12750 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0906 18:29:27.042365   12750 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-5859/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:29:27.042405   12750 notify.go:220] Checking for updates...
	I0906 18:29:27.043731   12750 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:27.044911   12750 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:27.046160   12750 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	I0906 18:29:27.047449   12750 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	I0906 18:29:27.048573   12750 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (1.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.473752943s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (1.47s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
--- PASS: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.646574ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:29
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:29.123896   12902 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:29.124000   12902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:29.124006   12902 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:29.124011   12902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:29.124154   12902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-5859/.minikube/bin
	I0906 18:29:29.124655   12902 out.go:352] Setting JSON to true
	I0906 18:29:29.126247   12902 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":711,"bootTime":1725646658,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:29.126314   12902 start.go:139] virtualization: kvm guest
	I0906 18:29:29.128090   12902 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0906 18:29:29.128204   12902 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-5859/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:29:29.128242   12902 notify.go:220] Checking for updates...
	I0906 18:29:29.129556   12902 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:29.130902   12902 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:29.132272   12902 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	I0906 18:29:29.133524   12902 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	I0906 18:29:29.134787   12902 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:41257 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
TestOffline (44.63s)

                                                
                                                
=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (43.015637784s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.610524394s)
--- PASS: TestOffline (44.63s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (46.820526ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (46.404318ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (175.84s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (2m55.841630745s)
--- PASS: TestAddons/Setup (175.84s)

                                                
                                    
TestAddons/serial/Volcano (39.45s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 10.001083ms
addons_test.go:897: volcano-scheduler stabilized in 10.189696ms
addons_test.go:905: volcano-admission stabilized in 10.22358ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-zwcxn" [7d366af5-c0cc-4d5d-a515-971318561372] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00349921s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-9xk8j" [f7504107-305e-4a8e-8e34-6e89c224f3cf] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00384933s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-f5sfv" [c543adb5-bef7-4360-bfe4-13db3cde2e90] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003195831s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [511ef80e-ca7e-4f5a-8508-707ad0154563] Pending
helpers_test.go:344: "test-job-nginx-0" [511ef80e-ca7e-4f5a-8508-707ad0154563] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [511ef80e-ca7e-4f5a-8508-707ad0154563] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003632052s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.109013244s)
--- PASS: TestAddons/serial/Volcano (39.45s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.46s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jbvsd" [3fab1905-e39d-4c14-9987-dedecd9e7b17] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003755054s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.455619545s)
--- PASS: TestAddons/parallel/InspektorGadget (10.46s)

TestAddons/parallel/MetricsServer (6.38s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.951636ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9xnx2" [8878e996-0060-4707-ae80-56a0f78c6d2a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003969406s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.38s)

TestAddons/parallel/HelmTiller (9.31s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.787628ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-qjchh" [683f5399-2294-48a0-b4b9-f13807b03c3a] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003938433s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.016399364s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.31s)

TestAddons/parallel/CSI (41.26s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.20765ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1fc6b477-1dbf-49ea-ad24-4a75e004f96f] Pending
helpers_test.go:344: "task-pv-pod" [1fc6b477-1dbf-49ea-ad24-4a75e004f96f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1fc6b477-1dbf-49ea-ad24-4a75e004f96f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003895422s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b0df6dee-c26c-4618-aa77-150d24a75523] Pending
helpers_test.go:344: "task-pv-pod-restore" [b0df6dee-c26c-4618-aa77-150d24a75523] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b0df6dee-c26c-4618-aa77-150d24a75523] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003894976s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.309854749s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.26s)

TestAddons/parallel/Headlamp (16.94s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-72c98" [770e2ea1-4aea-456b-b852-2edb325d392a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-72c98" [770e2ea1-4aea-456b-b852-2edb325d392a] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003297067s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.410889671s)
--- PASS: TestAddons/parallel/Headlamp (16.94s)

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-q2pqv" [c135ad82-4134-4ca2-a78c-c879fddfa72d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003530774s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/NvidiaDevicePlugin (5.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-992f8" [8b1041db-3dd1-4e54-b9f6-2006b5cfd9e8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003758375s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.23s)

TestAddons/parallel/Yakd (11.42s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-k9hqw" [4d4a8187-2eef-419f-ba40-9ed35ae6fe86] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003241864s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.413688599s)
--- PASS: TestAddons/parallel/Yakd (11.42s)

TestAddons/StoppedEnableDisable (10.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.472914191s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.77s)

TestCertExpiration (228.74s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.312875912s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.772322656s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.656353721s)
--- PASS: TestCertExpiration (228.74s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19576-5859/.minikube/files/etc/test/nested/copy/12738/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (24.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (24.920075256s)
--- PASS: TestFunctional/serial/StartWithProxy (24.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.18s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (28.180071682s)
functional_test.go:663: soft start took 28.180705771s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.18s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.240549297s)
functional_test.go:761: restart took 38.240669576s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.24s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.82s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.82s)

TestFunctional/serial/LogsFileCmd (0.84s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd3388966001/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.84s)

TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (162.610251ms)

-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.154.0.4:30468 |
	|-----------|-------------|-------------|-------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)

TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.300139ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.35773ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)

TestFunctional/parallel/DashboardCmd (3.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/06 18:50:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 47977: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.36s)

TestFunctional/parallel/DryRun (0.15s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (78.47244ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0906 18:50:34.265100   48310 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:50:34.265212   48310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:34.265221   48310 out.go:358] Setting ErrFile to fd 2...
	I0906 18:50:34.265225   48310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:34.265378   48310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-5859/.minikube/bin
	I0906 18:50:34.265921   48310 out.go:352] Setting JSON to false
	I0906 18:50:34.266899   48310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1976,"bootTime":1725646658,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:50:34.266962   48310 start.go:139] virtualization: kvm guest
	I0906 18:50:34.269223   48310 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0906 18:50:34.270464   48310 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-5859/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:50:34.270500   48310 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:50:34.270500   48310 notify.go:220] Checking for updates...
	I0906 18:50:34.272716   48310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:50:34.273856   48310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	I0906 18:50:34.275090   48310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	I0906 18:50:34.276254   48310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:50:34.277281   48310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:50:34.278721   48310 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:50:34.279087   48310 exec_runner.go:51] Run: systemctl --version
	I0906 18:50:34.281588   48310 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:50:34.293784   48310 out.go:177] * Using the none driver based on existing profile
	I0906 18:50:34.295055   48310 start.go:297] selected driver: none
	I0906 18:50:34.295075   48310 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:50:34.295212   48310 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:50:34.295240   48310 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0906 18:50:34.295534   48310 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0906 18:50:34.297620   48310 out.go:201] 
	W0906 18:50:34.298730   48310 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 18:50:34.300000   48310 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.018866ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0906 18:50:34.418137   48341 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:50:34.418253   48341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:34.418262   48341 out.go:358] Setting ErrFile to fd 2...
	I0906 18:50:34.418267   48341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:34.418508   48341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-5859/.minikube/bin
	I0906 18:50:34.419051   48341 out.go:352] Setting JSON to false
	I0906 18:50:34.419968   48341 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1976,"bootTime":1725646658,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:50:34.420028   48341 start.go:139] virtualization: kvm guest
	I0906 18:50:34.422244   48341 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0906 18:50:34.423554   48341 out.go:177]   - MINIKUBE_LOCATION=19576
	W0906 18:50:34.423536   48341 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-5859/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:50:34.423584   48341 notify.go:220] Checking for updates...
	I0906 18:50:34.426199   48341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:50:34.427631   48341 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	I0906 18:50:34.429087   48341 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	I0906 18:50:34.430500   48341 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:50:34.431673   48341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:50:34.433290   48341 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:50:34.433566   48341 exec_runner.go:51] Run: systemctl --version
	I0906 18:50:34.435968   48341 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:50:34.446856   48341 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0906 18:50:34.448188   48341 start.go:297] selected driver: none
	I0906 18:50:34.448200   48341 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:50:34.448284   48341 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:50:34.448308   48341 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0906 18:50:34.448576   48341 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0906 18:50:34.450754   48341 out.go:201] 
	W0906 18:50:34.451975   48341 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 18:50:34.453127   48341 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.42s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.42s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "177.683976ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.644835ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.22s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "174.985006ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.063368ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-vszs8" [85ad681b-3195-47fa-a3d7-d788c8026012] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-vszs8" [85ad681b-3195-47fa-a3d7-d788c8026012] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003715572s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "324.070321ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.154.0.4:30600
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.154.0.4:30600
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

TestFunctional/parallel/ServiceCmdConnect (6.3s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bhjcj" [81f58314-50d7-4674-8595-4838e62b0ee9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bhjcj" [81f58314-50d7-4674-8595-4838e62b0ee9] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003857479s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.154.0.4:32662
functional_test.go:1675: http://10.154.0.4:32662: success! body:

Hostname: hello-node-connect-67bdd5bbb4-bhjcj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.154.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=10.154.0.4:32662
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.30s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (21.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [85b9c118-bc9e-4b2c-8670-3f20e20a9f44] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003384841s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6276de8a-64a4-4d13-8b1a-ed3d83a9ad2d] Pending
helpers_test.go:344: "sp-pod" [6276de8a-64a4-4d13-8b1a-ed3d83a9ad2d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6276de8a-64a4-4d13-8b1a-ed3d83a9ad2d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003830488s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8848467f-9b2d-482f-ad9d-ee40fd289654] Pending
helpers_test.go:344: "sp-pod" [8848467f-9b2d-482f-ad9d-ee40fd289654] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8848467f-9b2d-482f-ad9d-ee40fd289654] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003667386s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.14s)

TestFunctional/parallel/MySQL (21.82s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-jzxp9" [958f6b8a-64e6-4007-b9ff-c336d87a3785] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-jzxp9" [958f6b8a-64e6-4007-b9ff-c336d87a3785] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004347469s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jzxp9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-jzxp9 -- mysql -ppassword -e "show databases;": exit status 1 (129.087981ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jzxp9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-jzxp9 -- mysql -ppassword -e "show databases;": exit status 1 (147.714761ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jzxp9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-jzxp9 -- mysql -ppassword -e "show databases;": exit status 1 (113.23168ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-jzxp9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.82s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.79s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.789287935s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.79s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.85s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.852445555s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.85s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.39s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (14.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.91158554s)
--- PASS: TestImageBuild/serial/Setup (14.91s)

TestImageBuild/serial/NormalBuild (2.81s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (2.811575634s)
--- PASS: TestImageBuild/serial/NormalBuild (2.81s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.92s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

                                                
                                    
TestJSONOutput/start/Command (29.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (29.826053533s)
--- PASS: TestJSONOutput/start/Command (29.83s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.49s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.32s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.320426234s)
--- PASS: TestJSONOutput/stop/Command (5.32s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.436688ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"54a22d6b-c71c-47ef-a4da-ea88785413dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"df59ea87-8148-417c-801d-2863bb6196f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"cb89fa68-d400-4712-8144-530adfc774e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c07dd67-3213-4069-8a97-07f43d4d2908","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig"}}
	{"specversion":"1.0","id":"7edb1303-c61d-4a29-bc16-0e20ae7a368d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube"}}
	{"specversion":"1.0","id":"cd3502d5-0144-44e1-bd07-4208bba06498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"011baaec-a673-4b2d-a31b-445f8b12ac0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8954d10d-7d67-4712-ad61-f0dfd36147f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.20s)
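Editor's note: TestErrorJSONOutput verifies that a bad `--driver` value still produces structured CloudEvents on stdout. As an illustrative sketch (not part of the test suite), such event lines can be parsed one JSON object per line; the sample below is the error event copied verbatim from the stdout above:

```python
import json

# One CloudEvent line copied from the test's stdout above; with --output=json,
# minikube emits one JSON object per line.
line = '{"specversion":"1.0","id":"8954d10d-7d67-4712-ad61-f0dfd36147f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}'

event = json.loads(line)
# Error events use the type suffix ".error" and carry the exit code in "data".
if event["type"].endswith(".error"):
    print(event["data"]["name"], event["data"]["exitcode"])
```

The printed name/exitcode pair (`DRV_UNSUPPORTED_OS`, `56`) matches the `exit status 56` the test asserts on.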

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (34.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (15.425953898s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.676278762s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.274912357s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.99s)

                                                
                                    
TestPause/serial/Start (30.16s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (30.161705751s)
--- PASS: TestPause/serial/Start (30.16s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.939792029s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.94s)

                                                
                                    
TestPause/serial/Pause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

                                                
                                    
TestPause/serial/VerifyStatus (0.13s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (128.399115ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
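Editor's note: the `exit status 2` here is expected for a paused cluster; minikube's `--layout=cluster` JSON encodes component state as status codes (200 = OK, 405 = Stopped, 418 = Paused). A minimal sketch of reading that document, using the status JSON copied from the stdout above:

```python
import json

# Cluster status copied verbatim from the `minikube status --output=json
# --layout=cluster` stdout above.
status = json.loads('{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}')

# Walk per-node components: apiserver reports Paused (418), kubelet Stopped (405).
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusCode"], comp["StatusName"])
```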

                                                
                                    
TestPause/serial/Unpause (0.43s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.43s)

                                                
                                    
TestPause/serial/PauseAgain (0.54s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

                                                
                                    
TestPause/serial/DeletePaused (1.7s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.699361244s)
--- PASS: TestPause/serial/DeletePaused (1.70s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                    
TestRunningBinaryUpgrade (78.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3984859114 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3984859114 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (35.547308156s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (37.096678084s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.14529227s)
--- PASS: TestRunningBinaryUpgrade (78.56s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (51.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1635484415 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1635484415 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.260285451s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1635484415 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1635484415 -p minikube stop: (23.714472537s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.307232552s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.28s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
TestKubernetesUpgrade (312.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (32.076825341s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.278221798s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (69.941348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m19.149421423s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (68.001393ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.274758642s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.134370617s)
--- PASS: TestKubernetesUpgrade (312.12s)
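Editor's note: the `K8S_DOWNGRADE_UNSUPPORTED` refusal above (exit status 106) comes down to comparing the requested `--kubernetes-version` against the version already deployed in the profile. The sketch below illustrates that comparison only; `parse_version` and `is_downgrade` are hypothetical helpers, not minikube's actual implementation:

```python
def parse_version(v: str) -> tuple:
    # "v1.31.0" -> (1, 31, 0); hypothetical helper, not minikube's real parser.
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_downgrade(existing: str, requested: str) -> bool:
    # Tuples compare element-wise, so (1, 20, 0) < (1, 31, 0).
    return parse_version(requested) < parse_version(existing)

# Mirrors the transitions exercised by TestKubernetesUpgrade:
print(is_downgrade("v1.31.0", "v1.20.0"))  # downgrade request: refused above
print(is_downgrade("v1.20.0", "v1.31.0"))  # upgrade: allowed
```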

                                                
                                    

Test skip (61/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.0/preload-exists 0
14 TestDownloadOnly/v1.31.0/cached-images 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
98 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
101 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
103 TestFunctional/parallel/SSHCmd 0
104 TestFunctional/parallel/CpCmd 0
106 TestFunctional/parallel/FileSync 0
107 TestFunctional/parallel/CertSync 0
112 TestFunctional/parallel/DockerEnv 0
113 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/ImageCommands 0
116 TestFunctional/parallel/NonActiveRuntimeDisabled 0
124 TestGvisorAddon 0
125 TestMultiControlPlane 0
133 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
160 TestKicCustomNetwork 0
161 TestKicExistingNetwork 0
162 TestKicCustomSubnet 0
163 TestKicStaticIP 0
166 TestMountStart 0
167 TestMultiNode 0
168 TestNetworkPlugins 0
169 TestNoKubernetes 0
170 TestChangeNoneUser 0
181 TestPreload 0
182 TestScheduledStopWindows 0
183 TestScheduledStopUnix 0
184 TestSkaffold 0
187 TestStartStop/group/old-k8s-version 0.13
188 TestStartStop/group/newest-cni 0.14
189 TestStartStop/group/default-k8s-diff-port 0.13
190 TestStartStop/group/no-preload 0.13
191 TestStartStop/group/disable-driver-mounts 0.13
192 TestStartStop/group/embed-certs 0.13
193 TestInsufficientStorage 0
200 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)

                                                
                                                
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestForceSystemdFlag (0s)

                                                
                                                
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
x
+
TestForceSystemdEnv (0s)

                                                
                                                
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestMountStart (0s)
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)
TestStartStop/group/old-k8s-version (0.13s)
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.14s)
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)

TestStartStop/group/default-k8s-diff-port (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)