Test Report: none_Linux 19636

a6feba20ebb4dc887776b248ea5c810d31cc7846:2024-09-13:36198

Test failures (1/166)

| Order | Failed Test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 71.88s   |
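The failure below comes down to an in-cluster HTTP probe timing out. The probe can be re-run by hand against a live profile (assuming the `minikube` context and the registry addon from this run are still available) using the same command the test issues:

```shell
# Re-run the registry reachability probe from the failing test.
# Assumes a running "minikube" profile with the registry addon enabled.
kubectl --context minikube run --rm registry-test \
    --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
```

The test expects the response headers to contain `HTTP/1.1 200`; in this run the command exited non-zero after the 1m0s wait, meaning the in-cluster service name never answered.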
TestAddons/parallel/Registry (71.88s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.864936ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qnsqn" [9d207cfe-fc0d-47fe-ae8e-3720eb38b045] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00343036s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j9v4g" [01e1c35f-1c90-440d-92e7-defa8bfc5517] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003303722s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.081207839s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/13 18:33:57 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:37771               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:22 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC | 13 Sep 24 18:22 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC | 13 Sep 24 18:24 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:22:10
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:22:10.064496   14328 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:22:10.064756   14328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:22:10.064766   14328 out.go:358] Setting ErrFile to fd 2...
	I0913 18:22:10.064770   14328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:22:10.064945   14328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3707/.minikube/bin
	I0913 18:22:10.065550   14328 out.go:352] Setting JSON to false
	I0913 18:22:10.066420   14328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":270,"bootTime":1726251460,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:22:10.066516   14328 start.go:139] virtualization: kvm guest
	I0913 18:22:10.068784   14328 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 18:22:10.070166   14328 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3707/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:22:10.070207   14328 notify.go:220] Checking for updates...
	I0913 18:22:10.070254   14328 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:22:10.071732   14328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:22:10.073500   14328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	I0913 18:22:10.075364   14328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	I0913 18:22:10.076802   14328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:22:10.078024   14328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:22:10.079405   14328 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:22:10.089919   14328 out.go:177] * Using the none driver based on user configuration
	I0913 18:22:10.091401   14328 start.go:297] selected driver: none
	I0913 18:22:10.091419   14328 start.go:901] validating driver "none" against <nil>
	I0913 18:22:10.091440   14328 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:22:10.091492   14328 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0913 18:22:10.091789   14328 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0913 18:22:10.092332   14328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 18:22:10.092590   14328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:22:10.092618   14328 cni.go:84] Creating CNI manager for ""
	I0913 18:22:10.092674   14328 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:22:10.092688   14328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 18:22:10.092751   14328 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:22:10.094439   14328 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0913 18:22:10.095977   14328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/config.json ...
	I0913 18:22:10.096016   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/config.json: {Name:mkd150c72083440d8af87241650f704d226e0f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:10.096194   14328 start.go:360] acquireMachinesLock for minikube: {Name:mk1177d6c2a3f835d0a2cf4f02b8ba8a9aa96d82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 18:22:10.096231   14328 start.go:364] duration metric: took 20.076µs to acquireMachinesLock for "minikube"
	I0913 18:22:10.096249   14328 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 18:22:10.096323   14328 start.go:125] createHost starting for "" (driver="none")
	I0913 18:22:10.098023   14328 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0913 18:22:10.099406   14328 exec_runner.go:51] Run: systemctl --version
	I0913 18:22:10.102134   14328 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0913 18:22:10.102184   14328 client.go:168] LocalClient.Create starting
	I0913 18:22:10.102324   14328 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3707/.minikube/certs/ca.pem
	I0913 18:22:10.102371   14328 main.go:141] libmachine: Decoding PEM data...
	I0913 18:22:10.102397   14328 main.go:141] libmachine: Parsing certificate...
	I0913 18:22:10.102470   14328 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3707/.minikube/certs/cert.pem
	I0913 18:22:10.102497   14328 main.go:141] libmachine: Decoding PEM data...
	I0913 18:22:10.102517   14328 main.go:141] libmachine: Parsing certificate...
	I0913 18:22:10.102955   14328 client.go:171] duration metric: took 763.006µs to LocalClient.Create
	I0913 18:22:10.102986   14328 start.go:167] duration metric: took 863.39µs to libmachine.API.Create "minikube"
	I0913 18:22:10.102995   14328 start.go:293] postStartSetup for "minikube" (driver="none")
	I0913 18:22:10.103045   14328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 18:22:10.103102   14328 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 18:22:10.113010   14328 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0913 18:22:10.113032   14328 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0913 18:22:10.113040   14328 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0913 18:22:10.115177   14328 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0913 18:22:10.116477   14328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3707/.minikube/addons for local assets ...
	I0913 18:22:10.116538   14328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3707/.minikube/files for local assets ...
	I0913 18:22:10.116560   14328 start.go:296] duration metric: took 13.555649ms for postStartSetup
	I0913 18:22:10.117178   14328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/config.json ...
	I0913 18:22:10.117317   14328 start.go:128] duration metric: took 20.983708ms to createHost
	I0913 18:22:10.117330   14328 start.go:83] releasing machines lock for "minikube", held for 21.088992ms
	I0913 18:22:10.117634   14328 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 18:22:10.117763   14328 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0913 18:22:10.119676   14328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 18:22:10.119850   14328 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 18:22:10.130480   14328 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 18:22:10.130507   14328 start.go:495] detecting cgroup driver to use...
	I0913 18:22:10.130536   14328 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 18:22:10.130625   14328 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:22:10.148681   14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 18:22:10.157627   14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 18:22:10.166827   14328 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 18:22:10.166886   14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 18:22:10.176218   14328 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:22:10.186633   14328 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 18:22:10.195860   14328 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 18:22:10.205493   14328 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 18:22:10.213826   14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 18:22:10.225497   14328 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 18:22:10.236179   14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 18:22:10.245314   14328 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 18:22:10.252802   14328 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 18:22:10.260772   14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 18:22:10.474369   14328 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0913 18:22:10.599258   14328 start.go:495] detecting cgroup driver to use...
	I0913 18:22:10.599326   14328 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 18:22:10.599433   14328 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 18:22:10.620954   14328 exec_runner.go:51] Run: which cri-dockerd
	I0913 18:22:10.622076   14328 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 18:22:10.631656   14328 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0913 18:22:10.631675   14328 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0913 18:22:10.631711   14328 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0913 18:22:10.641079   14328 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 18:22:10.641267   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1370012668 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0913 18:22:10.650147   14328 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0913 18:22:10.876081   14328 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0913 18:22:11.091120   14328 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 18:22:11.091286   14328 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0913 18:22:11.091301   14328 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0913 18:22:11.091347   14328 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0913 18:22:11.100525   14328 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0913 18:22:11.100695   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2269362476 /etc/docker/daemon.json
	I0913 18:22:11.109304   14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 18:22:11.336958   14328 exec_runner.go:51] Run: sudo systemctl restart docker
	I0913 18:22:11.748133   14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 18:22:11.759660   14328 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0913 18:22:11.776010   14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 18:22:11.787458   14328 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0913 18:22:12.004519   14328 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0913 18:22:12.232992   14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 18:22:12.446713   14328 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0913 18:22:12.460609   14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 18:22:12.471894   14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 18:22:12.684169   14328 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0913 18:22:12.753102   14328 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 18:22:12.753179   14328 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0913 18:22:12.754553   14328 start.go:563] Will wait 60s for crictl version
	I0913 18:22:12.754601   14328 exec_runner.go:51] Run: which crictl
	I0913 18:22:12.755327   14328 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0913 18:22:12.788236   14328 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0913 18:22:12.788301   14328 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0913 18:22:12.809369   14328 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0913 18:22:12.833558   14328 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 18:22:12.833635   14328 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0913 18:22:12.836443   14328 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0913 18:22:12.837616   14328 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 18:22:12.837724   14328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 18:22:12.837735   14328 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.1 docker true true} ...
	I0913 18:22:12.837808   14328 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0913 18:22:12.837850   14328 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0913 18:22:12.886148   14328 cni.go:84] Creating CNI manager for ""
	I0913 18:22:12.886173   14328 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:22:12.886185   14328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 18:22:12.886210   14328 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 18:22:12.886379   14328 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.154.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-9"
	  kubeletExtraArgs:
	    node-ip: 10.154.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
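The config dump above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, which minikube writes to /var/tmp/minikube/kubeadm.yaml. A quick way to sanity-check such a file offline is to list its document kinds; the sketch below uses a hypothetical local sample file, not the file from this run:

```shell
# Sketch: list the document kinds in a multi-doc kubeadm config like the one above.
# kubeadm-sample.yaml is a hypothetical stand-in for /var/tmp/minikube/kubeadm.yaml.
cat > kubeadm-sample.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each top-level document declares exactly one "kind:"; print them in order.
awk '/^kind:/{print $2}' kubeadm-sample.yaml
```

The same one-liner works against the real file on the node once the cluster is up.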
	
	I0913 18:22:12.886446   14328 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 18:22:12.895363   14328 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 18:22:12.895421   14328 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 18:22:12.904586   14328 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0913 18:22:12.904586   14328 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 18:22:12.904631   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 18:22:12.904618   14328 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0913 18:22:12.904673   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 18:22:12.904710   14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:22:12.917826   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0913 18:22:12.955087   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube26677164 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 18:22:12.968915   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2207779581 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 18:22:12.998652   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3904692067 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 18:22:13.063653   14328 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 18:22:13.072336   14328 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0913 18:22:13.072357   14328 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0913 18:22:13.072396   14328 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0913 18:22:13.080405   14328 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0913 18:22:13.080557   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1860911824 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0913 18:22:13.088934   14328 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0913 18:22:13.088953   14328 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0913 18:22:13.089000   14328 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0913 18:22:13.099087   14328 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 18:22:13.099293   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2025006050 /lib/systemd/system/kubelet.service
	I0913 18:22:13.108153   14328 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0913 18:22:13.108311   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube904266075 /var/tmp/minikube/kubeadm.yaml.new
	I0913 18:22:13.117067   14328 exec_runner.go:51] Run: grep 10.154.0.4	control-plane.minikube.internal$ /etc/hosts
	I0913 18:22:13.118458   14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 18:22:13.356904   14328 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0913 18:22:13.372081   14328 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube for IP: 10.154.0.4
	I0913 18:22:13.372104   14328 certs.go:194] generating shared ca certs ...
	I0913 18:22:13.372122   14328 certs.go:226] acquiring lock for ca certs: {Name:mk785798fbcf81959753f3319707a0af9d7664a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:13.372244   14328 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.key
	I0913 18:22:13.372280   14328 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3707/.minikube/proxy-client-ca.key
	I0913 18:22:13.372288   14328 certs.go:256] generating profile certs ...
	I0913 18:22:13.372336   14328 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.key
	I0913 18:22:13.372357   14328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt with IP's: []
	I0913 18:22:13.472121   14328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt ...
	I0913 18:22:13.472150   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: {Name:mka4c529ecd82dac1d339ecc23df92e7a4b5760a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:13.472277   14328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.key ...
	I0913 18:22:13.472288   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.key: {Name:mkf3ba81e5c3f62106e3bc734fad28e18b450c94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:13.472383   14328 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6
	I0913 18:22:13.472399   14328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
	I0913 18:22:13.720539   14328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
	I0913 18:22:13.720572   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk5d0e0561d9e7c5ac5b0fdadeb29312aa6ba98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:13.720719   14328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
	I0913 18:22:13.720731   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mka43a7eac85ae832b2eead542b6c36553cf716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:13.720782   14328 certs.go:381] copying /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt
	I0913 18:22:13.720853   14328 certs.go:385] copying /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key
	I0913 18:22:13.720903   14328 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key
	I0913 18:22:13.720917   14328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0913 18:22:14.033549   14328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt ...
	I0913 18:22:14.033579   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt: {Name:mk71d7e14da716aec8f7fbf2afe69fe41263189b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:14.033719   14328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key ...
	I0913 18:22:14.033729   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key: {Name:mkf9e7d94dfc211aa294d98fdcb9b5236622ab84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:14.033875   14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 18:22:14.033908   14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/ca.pem (1078 bytes)
	I0913 18:22:14.033930   14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/cert.pem (1123 bytes)
	I0913 18:22:14.033950   14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/key.pem (1679 bytes)
	I0913 18:22:14.034512   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 18:22:14.034634   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1013393034 /var/lib/minikube/certs/ca.crt
	I0913 18:22:14.043563   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 18:22:14.043688   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1187429901 /var/lib/minikube/certs/ca.key
	I0913 18:22:14.052884   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 18:22:14.053014   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1829648559 /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 18:22:14.060896   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 18:22:14.061034   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4232937632 /var/lib/minikube/certs/proxy-client-ca.key
	I0913 18:22:14.070657   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0913 18:22:14.070784   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4007334980 /var/lib/minikube/certs/apiserver.crt
	I0913 18:22:14.079321   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 18:22:14.079434   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1181490233 /var/lib/minikube/certs/apiserver.key
	I0913 18:22:14.088981   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 18:22:14.089090   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3362529697 /var/lib/minikube/certs/proxy-client.crt
	I0913 18:22:14.097305   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 18:22:14.097426   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4137465544 /var/lib/minikube/certs/proxy-client.key
	I0913 18:22:14.105511   14328 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0913 18:22:14.105534   14328 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:14.105570   14328 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:14.113102   14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 18:22:14.113243   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4179127001 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:14.121481   14328 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 18:22:14.121594   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1380107211 /var/lib/minikube/kubeconfig
	I0913 18:22:14.129827   14328 exec_runner.go:51] Run: openssl version
	I0913 18:22:14.132478   14328 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 18:22:14.141222   14328 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:14.142509   14328 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:14.142550   14328 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 18:22:14.145249   14328 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
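The two openssl steps above implement OpenSSL's subject-hash lookup: a CA certificate is found by tools that scan /etc/ssl/certs for a symlink named `<subject-hash>.0` (here `b5213941.0`) pointing at the PEM. The standalone sketch below reproduces that linking with a throwaway self-signed certificate in a local directory; the CN and paths are illustrative, not taken from this run, and no sudo is needed:

```shell
# Generate a throwaway self-signed cert, then link it by subject hash,
# mirroring the "ln -fs ... /etc/ssl/certs/<hash>.0" step in the log.
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout certs/ca.key -out certs/ca.pem -days 1 2>/dev/null
# "openssl x509 -hash" prints the subject-name hash used for the link name.
hash=$(openssl x509 -hash -noout -in certs/ca.pem)
ln -fs "$PWD/certs/ca.pem" "certs/${hash}.0"
echo "linked as ${hash}.0"
```

Running `c_rehash` (or `openssl rehash`) over a directory automates exactly this naming convention.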
	I0913 18:22:14.153446   14328 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 18:22:14.154522   14328 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 18:22:14.154558   14328 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:22:14.154670   14328 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 18:22:14.169618   14328 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 18:22:14.178435   14328 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 18:22:14.187095   14328 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0913 18:22:14.209512   14328 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 18:22:14.218144   14328 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 18:22:14.218164   14328 kubeadm.go:157] found existing configuration files:
	
	I0913 18:22:14.218205   14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 18:22:14.226329   14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 18:22:14.226384   14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 18:22:14.233697   14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 18:22:14.242804   14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 18:22:14.242861   14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 18:22:14.250768   14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 18:22:14.260192   14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 18:22:14.260253   14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 18:22:14.269276   14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 18:22:14.278504   14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 18:22:14.278559   14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 18:22:14.286865   14328 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 18:22:14.320061   14328 kubeadm.go:310] W0913 18:22:14.319926   15232 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:14.320526   14328 kubeadm.go:310] W0913 18:22:14.320475   15232 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 18:22:14.322067   14328 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 18:22:14.322103   14328 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 18:22:14.413966   14328 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 18:22:14.414091   14328 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 18:22:14.414108   14328 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 18:22:14.414116   14328 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 18:22:14.424391   14328 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 18:22:14.427855   14328 out.go:235]   - Generating certificates and keys ...
	I0913 18:22:14.427902   14328 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 18:22:14.427918   14328 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 18:22:14.624930   14328 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 18:22:15.040758   14328 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 18:22:15.162408   14328 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 18:22:15.658152   14328 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 18:22:16.025339   14328 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 18:22:16.025416   14328 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0913 18:22:16.120885   14328 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 18:22:16.120989   14328 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0913 18:22:16.372802   14328 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 18:22:16.510885   14328 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 18:22:16.945085   14328 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 18:22:16.945228   14328 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 18:22:17.084123   14328 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 18:22:17.300113   14328 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 18:22:17.553031   14328 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 18:22:17.682710   14328 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 18:22:17.780326   14328 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 18:22:17.780872   14328 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 18:22:17.783119   14328 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 18:22:17.785707   14328 out.go:235]   - Booting up control plane ...
	I0913 18:22:17.785740   14328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 18:22:17.785762   14328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 18:22:17.786072   14328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 18:22:17.808012   14328 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 18:22:17.812401   14328 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 18:22:17.812425   14328 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 18:22:18.061184   14328 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 18:22:18.061207   14328 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 18:22:19.062988   14328 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001791222s
	I0913 18:22:19.063011   14328 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 18:22:23.564817   14328 kubeadm.go:310] [api-check] The API server is healthy after 4.501698381s
	I0913 18:22:23.577028   14328 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 18:22:23.587239   14328 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 18:22:23.607756   14328 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 18:22:23.607779   14328 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 18:22:23.617294   14328 kubeadm.go:310] [bootstrap-token] Using token: sldh4y.yuhm3u7inwrmozvf
	I0913 18:22:23.618670   14328 out.go:235]   - Configuring RBAC rules ...
	I0913 18:22:23.618699   14328 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 18:22:23.622263   14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 18:22:23.627835   14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 18:22:23.631523   14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 18:22:23.634289   14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 18:22:23.636831   14328 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 18:22:23.971592   14328 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 18:22:24.404971   14328 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 18:22:24.972154   14328 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 18:22:24.973197   14328 kubeadm.go:310] 
	I0913 18:22:24.973216   14328 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 18:22:24.973221   14328 kubeadm.go:310] 
	I0913 18:22:24.973226   14328 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 18:22:24.973237   14328 kubeadm.go:310] 
	I0913 18:22:24.973241   14328 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 18:22:24.973245   14328 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 18:22:24.973258   14328 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 18:22:24.973262   14328 kubeadm.go:310] 
	I0913 18:22:24.973266   14328 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 18:22:24.973271   14328 kubeadm.go:310] 
	I0913 18:22:24.973276   14328 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 18:22:24.973284   14328 kubeadm.go:310] 
	I0913 18:22:24.973288   14328 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 18:22:24.973295   14328 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 18:22:24.973300   14328 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 18:22:24.973307   14328 kubeadm.go:310] 
	I0913 18:22:24.973313   14328 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 18:22:24.973319   14328 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 18:22:24.973328   14328 kubeadm.go:310] 
	I0913 18:22:24.973335   14328 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sldh4y.yuhm3u7inwrmozvf \
	I0913 18:22:24.973341   14328 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:961bd654f095ef1a6d147d71a918dd0b71b1322a66f9fb78ac53da26dd6c0c4c \
	I0913 18:22:24.973345   14328 kubeadm.go:310] 	--control-plane 
	I0913 18:22:24.973350   14328 kubeadm.go:310] 
	I0913 18:22:24.973357   14328 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 18:22:24.973361   14328 kubeadm.go:310] 
	I0913 18:22:24.973368   14328 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sldh4y.yuhm3u7inwrmozvf \
	I0913 18:22:24.973373   14328 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:961bd654f095ef1a6d147d71a918dd0b71b1322a66f9fb78ac53da26dd6c0c4c 
	I0913 18:22:24.976295   14328 cni.go:84] Creating CNI manager for ""
	I0913 18:22:24.976324   14328 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 18:22:24.979363   14328 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 18:22:24.980696   14328 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0913 18:22:24.991712   14328 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 18:22:24.991877   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3692986788 /etc/cni/net.d/1-k8s.conflist
	I0913 18:22:25.004762   14328 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 18:22:25.004824   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:25.004885   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_09_13T18_22_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0913 18:22:25.013361   14328 ops.go:34] apiserver oom_adj: -16
	I0913 18:22:25.082888   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:25.583265   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:26.083471   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:26.583482   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.082955   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:27.582949   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.083147   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:28.583108   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.083583   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.583491   14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 18:22:29.673230   14328 kubeadm.go:1113] duration metric: took 4.668457432s to wait for elevateKubeSystemPrivileges
	I0913 18:22:29.673261   14328 kubeadm.go:394] duration metric: took 15.518707703s to StartCluster
	I0913 18:22:29.673282   14328 settings.go:142] acquiring lock: {Name:mk98196c8c447c4d1ddda32c1e2d671af91b86c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:29.673336   14328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19636-3707/kubeconfig
	I0913 18:22:29.673909   14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/kubeconfig: {Name:mk8dbe36e5fbf6af14c0274573a74465da65b6cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 18:22:29.674107   14328 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 18:22:29.674189   14328 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 18:22:29.674323   14328 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0913 18:22:29.674337   14328 addons.go:69] Setting yakd=true in profile "minikube"
	I0913 18:22:29.674351   14328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0913 18:22:29.674356   14328 addons.go:234] Setting addon yakd=true in "minikube"
	I0913 18:22:29.674358   14328 addons.go:69] Setting registry=true in profile "minikube"
	I0913 18:22:29.674372   14328 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0913 18:22:29.674389   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.674385   14328 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:22:29.674398   14328 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0913 18:22:29.674407   14328 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0913 18:22:29.674417   14328 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0913 18:22:29.674427   14328 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0913 18:22:29.674429   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.674431   14328 addons.go:69] Setting volcano=true in profile "minikube"
	I0913 18:22:29.674439   14328 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0913 18:22:29.674440   14328 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0913 18:22:29.674446   14328 addons.go:234] Setting addon volcano=true in "minikube"
	I0913 18:22:29.674454   14328 mustload.go:65] Loading cluster: minikube
	I0913 18:22:29.674469   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.674472   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.674631   14328 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:22:29.674663   14328 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0913 18:22:29.674696   14328 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0913 18:22:29.674721   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.674968   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.674983   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675016   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.675090   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675094   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675094   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675109   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675111   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675110   14328 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0913 18:22:29.675118   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675123   14328 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0913 18:22:29.675130   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675142   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.674419   14328 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0913 18:22:29.675145   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.675158   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.675165   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.675290   14328 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0913 18:22:29.675323   14328 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0913 18:22:29.675346   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.675660   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675675   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675707   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.675760   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675771   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675800   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.675800   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675813   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675846   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.674398   14328 addons.go:234] Setting addon registry=true in "minikube"
	I0913 18:22:29.675946   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.675962   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.675976   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.675994   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.674407   14328 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0913 18:22:29.676143   14328 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0913 18:22:29.675142   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.676313   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.676327   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.676357   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.674430   14328 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0913 18:22:29.676457   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.676744   14328 out.go:177] * Configuring local host environment ...
	I0913 18:22:29.677068   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.677089   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.677117   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0913 18:22:29.678452   14328 out.go:270] * 
	W0913 18:22:29.678479   14328 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0913 18:22:29.678493   14328 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0913 18:22:29.678501   14328 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0913 18:22:29.678514   14328 out.go:270] * 
	W0913 18:22:29.678567   14328 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0913 18:22:29.678581   14328 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0913 18:22:29.678593   14328 out.go:270] * 
	W0913 18:22:29.678619   14328 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0913 18:22:29.678630   14328 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0913 18:22:29.678641   14328 out.go:270] * 
	W0913 18:22:29.678657   14328 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0913 18:22:29.678692   14328 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 18:22:29.675101   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.679264   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.680160   14328 out.go:177] * Verifying Kubernetes components...
	I0913 18:22:29.682184   14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 18:22:29.699527   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.699591   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.699614   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.699651   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.699909   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.699940   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.699972   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.702358   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.718121   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.718320   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.720220   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.720272   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.720330   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.721701   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.721829   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.721894   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.723328   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.738305   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.740079   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.740105   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.740264   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.740310   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.740549   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.741939   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.744151   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.744173   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.745721   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.746625   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.749445   14328 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0913 18:22:29.749491   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.750172   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.750189   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.750224   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.751303   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.752227   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.752287   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.752726   14328 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 18:22:29.753915   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.753937   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.755074   14328 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:22:29.755127   14328 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 18:22:29.755313   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2808512143 /etc/kubernetes/addons/ig-namespace.yaml
	I0913 18:22:29.758167   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.758224   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.758953   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.759718   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.759773   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.760908   14328 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 18:22:29.764795   14328 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 18:22:29.767002   14328 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 18:22:29.770013   14328 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:22:29.770054   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 18:22:29.770643   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2613983426 /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:22:29.771865   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.771928   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.777015   14328 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:22:29.777041   14328 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 18:22:29.777135   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube113624407 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 18:22:29.777001   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.777304   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.783109   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.783058   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.783205   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.783433   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.783456   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.784118   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.784136   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.784661   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.793438   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.796158   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 18:22:29.799450   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.799478   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.799632   14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:22:29.799680   14328 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 18:22:29.799842   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube44809800 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 18:22:29.800806   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.811311   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.811376   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.811562   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.811603   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.811907   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.811932   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.812234   14328 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 18:22:29.812256   14328 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 18:22:29.812383   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1971748866 /etc/kubernetes/addons/ig-role.yaml
	I0913 18:22:29.818810   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.818835   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.819179   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.819199   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.820116   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.820164   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.820657   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.820677   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.821351   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.821396   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.823174   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.823624   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.823641   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.825119   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.826139   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 18:22:29.826376   14328 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0913 18:22:29.826428   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:29.827250   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:29.827273   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:29.827384   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:29.827824   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.829170   14328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:22:29.829203   14328 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 18:22:29.829332   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2520373827 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 18:22:29.829402   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.829968   14328 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 18:22:29.830008   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 18:22:29.830750   14328 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 18:22:29.831068   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.832230   14328 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:22:29.832261   14328 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 18:22:29.832398   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2880040134 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 18:22:29.832914   14328 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 18:22:29.833025   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 18:22:29.837878   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 18:22:29.837907   14328 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:29.837940   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 18:22:29.838098   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4059801455 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:29.838102   14328 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:29.838140   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 18:22:29.838163   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.838182   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.838597   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1727490139 /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:29.839854   14328 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:22:29.839889   14328 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 18:22:29.840006   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2728120261 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 18:22:29.843555   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.845319   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.845346   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.848348   14328 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 18:22:29.848342   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 18:22:29.848824   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 18:22:29.850237   14328 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:22:29.850383   14328 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 18:22:29.850547   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3118347710 /etc/kubernetes/addons/yakd-ns.yaml
	I0913 18:22:29.852079   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.852103   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.852444   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.852555   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.852676   14328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:22:29.852698   14328 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 18:22:29.852834   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1090503355 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 18:22:29.853134   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 18:22:29.854241   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 18:22:29.854798   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 18:22:29.856295   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 18:22:29.856366   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.856431   14328 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:29.856449   14328 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0913 18:22:29.856455   14328 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:29.856493   14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:29.856774   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.858647   14328 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 18:22:29.858736   14328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 18:22:29.860307   14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:22:29.860336   14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 18:22:29.860488   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube109296710 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 18:22:29.862262   14328 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 18:22:29.863986   14328 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:22:29.864013   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 18:22:29.864135   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1998936989 /etc/kubernetes/addons/registry-rc.yaml
	I0913 18:22:29.868672   14328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:22:29.868695   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 18:22:29.868795   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1854299642 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 18:22:29.871825   14328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:22:29.871860   14328 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 18:22:29.872002   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3505779699 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 18:22:29.873335   14328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:22:29.873350   14328 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 18:22:29.873427   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2794437994 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 18:22:29.874817   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.874833   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.875546   14328 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 18:22:29.875698   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3603856914 /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:29.876623   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:29.881186   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.881601   14328 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:22:29.881633   14328 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 18:22:29.881787   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3111377881 /etc/kubernetes/addons/yakd-sa.yaml
	I0913 18:22:29.883741   14328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 18:22:29.885458   14328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:29.885485   14328 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0913 18:22:29.885495   14328 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:29.885539   14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:29.891231   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:29.891298   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:29.891702   14328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:22:29.891725   14328 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 18:22:29.891826   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1657859118 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 18:22:29.893902   14328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:22:29.893929   14328 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 18:22:29.894050   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3971364297 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 18:22:29.898824   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 18:22:29.899207   14328 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:22:29.899238   14328 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 18:22:29.899605   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3656227679 /etc/kubernetes/addons/registry-svc.yaml
	I0913 18:22:29.908396   14328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:29.908423   14328 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 18:22:29.908536   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2518243723 /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:29.919140   14328 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:22:29.919183   14328 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 18:22:29.919326   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2784403250 /etc/kubernetes/addons/ig-crd.yaml
	I0913 18:22:29.924818   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 18:22:29.924983   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1732008546 /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:29.929503   14328 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:22:29.929542   14328 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 18:22:29.929691   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2661398203 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 18:22:29.936251   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 18:22:29.941570   14328 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 18:22:29.943109   14328 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:22:29.943139   14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 18:22:29.943276   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1692137981 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 18:22:29.952268   14328 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:29.952301   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 18:22:29.952396   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3269589089 /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:29.954778   14328 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:22:29.954820   14328 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 18:22:29.954962   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1867590320 /etc/kubernetes/addons/yakd-crb.yaml
	I0913 18:22:29.955551   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:29.955581   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:29.962617   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:29.968110   14328 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:29.968139   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 18:22:29.968281   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2969284621 /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:29.970354   14328 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 18:22:29.974802   14328 out.go:177]   - Using image docker.io/busybox:stable
	I0913 18:22:29.975865   14328 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:29.975895   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 18:22:29.976023   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube64902857 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:29.977091   14328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:29.977119   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 18:22:29.977211   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2207215465 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:29.977313   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 18:22:29.979394   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 18:22:29.993996   14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:22:29.994035   14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 18:22:29.994163   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1945848953 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 18:22:30.037170   14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:22:30.037203   14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 18:22:30.037324   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3679622764 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 18:22:30.051190   14328 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:22:30.051225   14328 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 18:22:30.051349   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube473149968 /etc/kubernetes/addons/yakd-svc.yaml
	I0913 18:22:30.061205   14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:22:30.061239   14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 18:22:30.061467   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2654012100 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 18:22:30.082521   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 18:22:30.088061   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 18:22:30.100641   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:30.114969   14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:22:30.115006   14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 18:22:30.115139   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1771831967 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 18:22:30.152246   14328 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0913 18:22:30.157259   14328 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:30.157293   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 18:22:30.157429   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3908865038 /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:30.200183   14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:22:30.200220   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 18:22:30.200369   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1603298593 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 18:22:30.209308   14328 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
	I0913 18:22:30.212211   14328 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
	I0913 18:22:30.212238   14328 node_ready.go:38] duration metric: took 2.87252ms for node "ubuntu-20-agent-9" to be "Ready" ...
	I0913 18:22:30.212250   14328 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:30.223084   14328 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:30.288261   14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:22:30.288320   14328 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 18:22:30.288471   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2900606559 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 18:22:30.307848   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 18:22:30.422099   14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:22:30.422145   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 18:22:30.425414   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3801295454 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 18:22:30.480008   14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:22:30.480045   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 18:22:30.480171   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube825372909 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 18:22:30.554465   14328 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0913 18:22:30.655787   14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:30.655838   14328 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 18:22:30.658113   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2424578108 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:30.722736   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 18:22:30.990963   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.054659565s)
	I0913 18:22:30.991005   14328 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0913 18:22:31.064684   14328 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0913 18:22:31.212697   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233268842s)
	I0913 18:22:31.274282   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.296884817s)
	I0913 18:22:31.274317   14328 addons.go:475] Verifying addon registry=true in "minikube"
	I0913 18:22:31.276597   14328 out.go:177] * Verifying registry addon...
	I0913 18:22:31.282829   14328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 18:22:31.293202   14328 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 18:22:31.293223   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:31.332432   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.249847536s)
	I0913 18:22:31.370055   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.28194539s)
	I0913 18:22:31.439304   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.131348016s)
	I0913 18:22:31.443702   14328 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0913 18:22:31.802504   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:31.860313   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.75955483s)
	W0913 18:22:31.860352   14328 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:31.860376   14328 retry.go:31] will retry after 330.814391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 18:22:32.191905   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 18:22:32.234247   14328 pod_ready.go:103] pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:32.287583   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:32.786972   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:32.927699   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.078827355s)
	I0913 18:22:33.288515   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:33.336483   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.613663253s)
	I0913 18:22:33.336527   14328 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0913 18:22:33.338137   14328 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 18:22:33.340404   14328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 18:22:33.358755   14328 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 18:22:33.358783   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:33.786169   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:33.888731   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:34.287866   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:34.388956   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:34.728388   14328 pod_ready.go:103] pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace has status "Ready":"False"
	I0913 18:22:34.786725   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:34.889049   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:35.153174   14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.961219478s)
	I0913 18:22:35.287092   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:35.345937   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:35.729584   14328 pod_ready.go:93] pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:35.729610   14328 pod_ready.go:82] duration metric: took 5.506492573s for pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:35.729623   14328 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:35.787650   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:35.889303   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:36.287011   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:36.417030   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:36.786616   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:36.809096   14328 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 18:22:36.809245   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube950786621 /var/lib/minikube/google_application_credentials.json
	I0913 18:22:36.819682   14328 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 18:22:36.819796   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2349982424 /var/lib/minikube/google_cloud_project
	I0913 18:22:36.831179   14328 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0913 18:22:36.831238   14328 host.go:66] Checking if "minikube" exists ...
	I0913 18:22:36.831957   14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0913 18:22:36.831987   14328 api_server.go:166] Checking apiserver status ...
	I0913 18:22:36.832034   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:36.849525   14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
	I0913 18:22:36.862299   14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
	I0913 18:22:36.862365   14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
	I0913 18:22:36.871948   14328 api_server.go:204] freezer state: "THAWED"
	I0913 18:22:36.871974   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:36.876389   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:36.876447   14328 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 18:22:36.879649   14328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 18:22:36.881406   14328 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 18:22:36.883224   14328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:22:36.883279   14328 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 18:22:36.883448   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3119801814 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 18:22:36.888205   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:36.893035   14328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:22:36.893069   14328 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 18:22:36.893168   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411136400 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 18:22:36.905142   14328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:36.905171   14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 18:22:36.905300   14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1474031396 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:36.915488   14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 18:22:37.252033   14328 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0913 18:22:37.253666   14328 out.go:177] * Verifying gcp-auth addon...
	I0913 18:22:37.256155   14328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 18:22:37.259244   14328 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:22:37.286801   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:37.360859   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:37.732737   14328 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-w786s" not found
	I0913 18:22:37.732763   14328 pod_ready.go:82] duration metric: took 2.003132575s for pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace to be "Ready" ...
	E0913 18:22:37.732773   14328 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-w786s" not found
	I0913 18:22:37.732780   14328 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.736847   14328 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:37.736865   14328 pod_ready.go:82] duration metric: took 4.07971ms for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.736874   14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.740513   14328 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:37.740530   14328 pod_ready.go:82] duration metric: took 3.650368ms for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.740541   14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:37.787033   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:37.844421   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:38.288087   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:38.360794   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:38.786242   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:38.845617   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:39.246503   14328 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.246531   14328 pod_ready.go:82] duration metric: took 1.505980997s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.246544   14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7h9jz" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.251115   14328 pod_ready.go:93] pod "kube-proxy-7h9jz" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.251133   14328 pod_ready.go:82] duration metric: took 4.581785ms for pod "kube-proxy-7h9jz" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.251142   14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.255127   14328 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0913 18:22:39.255150   14328 pod_ready.go:82] duration metric: took 3.998545ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0913 18:22:39.255159   14328 pod_ready.go:39] duration metric: took 9.042897978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 18:22:39.255181   14328 api_server.go:52] waiting for apiserver process to appear ...
	I0913 18:22:39.255237   14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 18:22:39.276986   14328 api_server.go:72] duration metric: took 9.598254421s to wait for apiserver process to appear ...
	I0913 18:22:39.277015   14328 api_server.go:88] waiting for apiserver healthz status ...
	I0913 18:22:39.277037   14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0913 18:22:39.281385   14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0913 18:22:39.282315   14328 api_server.go:141] control plane version: v1.31.1
	I0913 18:22:39.282345   14328 api_server.go:131] duration metric: took 5.322484ms to wait for apiserver health ...
	I0913 18:22:39.282355   14328 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 18:22:39.286895   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.339926   14328 system_pods.go:59] 16 kube-system pods found
	I0913 18:22:39.339954   14328 system_pods.go:61] "coredns-7c65d6cfc9-dzc9p" [64712751-8105-4d5b-86b8-5bd2782e3bd9] Running
	I0913 18:22:39.339965   14328 system_pods.go:61] "csi-hostpath-attacher-0" [4ec23112-7b72-4bfd-8ff6-973e3b964990] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:39.339973   14328 system_pods.go:61] "csi-hostpath-resizer-0" [57bae9b9-3a16-482a-b527-3e8596fe036a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:39.339984   14328 system_pods.go:61] "csi-hostpathplugin-7rh6q" [9d9397e0-4a8e-4f8e-82f3-6db78c8f1dc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:39.339990   14328 system_pods.go:61] "etcd-ubuntu-20-agent-9" [3ccd7eaf-cf0f-432d-9934-c13ed80108c6] Running
	I0913 18:22:39.339998   14328 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [939565f6-c84a-48c8-93b6-e34c15288f83] Running
	I0913 18:22:39.340008   14328 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [cbe48c36-d55f-4a82-b1aa-3fd1ce21253c] Running
	I0913 18:22:39.340016   14328 system_pods.go:61] "kube-proxy-7h9jz" [52317402-eb63-48c9-8336-46a0844b829a] Running
	I0913 18:22:39.340022   14328 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [b80efbfb-025f-46ca-9dd2-05a697e7f31b] Running
	I0913 18:22:39.340033   14328 system_pods.go:61] "metrics-server-84c5f94fbc-lkmcp" [5492915c-f03f-42c5-aae6-2a86f778d2cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:39.340046   14328 system_pods.go:61] "nvidia-device-plugin-daemonset-4lxnd" [4a7fb3ca-f619-4ff0-9c91-dff0f066b225] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:39.340058   14328 system_pods.go:61] "registry-66c9cd494c-qnsqn" [9d207cfe-fc0d-47fe-ae8e-3720eb38b045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:39.340069   14328 system_pods.go:61] "registry-proxy-j9v4g" [01e1c35f-1c90-440d-92e7-defa8bfc5517] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:39.340077   14328 system_pods.go:61] "snapshot-controller-56fcc65765-m7zlt" [225ec73c-c647-4222-980d-449e0b3cdd5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:39.340086   14328 system_pods.go:61] "snapshot-controller-56fcc65765-s9gr4" [d9c3fead-79ec-46ad-988c-a79adb6ce2fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:39.340091   14328 system_pods.go:61] "storage-provisioner" [2d994015-e9ee-437f-9a93-f03abeb1e209] Running
	I0913 18:22:39.340099   14328 system_pods.go:74] duration metric: took 57.737104ms to wait for pod list to return data ...
	I0913 18:22:39.340106   14328 default_sa.go:34] waiting for default service account to be created ...
	I0913 18:22:39.345265   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:39.534610   14328 default_sa.go:45] found service account: "default"
	I0913 18:22:39.534638   14328 default_sa.go:55] duration metric: took 194.525056ms for default service account to be created ...
	I0913 18:22:39.534650   14328 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 18:22:39.739710   14328 system_pods.go:86] 16 kube-system pods found
	I0913 18:22:39.739738   14328 system_pods.go:89] "coredns-7c65d6cfc9-dzc9p" [64712751-8105-4d5b-86b8-5bd2782e3bd9] Running
	I0913 18:22:39.739749   14328 system_pods.go:89] "csi-hostpath-attacher-0" [4ec23112-7b72-4bfd-8ff6-973e3b964990] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 18:22:39.739757   14328 system_pods.go:89] "csi-hostpath-resizer-0" [57bae9b9-3a16-482a-b527-3e8596fe036a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 18:22:39.739767   14328 system_pods.go:89] "csi-hostpathplugin-7rh6q" [9d9397e0-4a8e-4f8e-82f3-6db78c8f1dc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 18:22:39.739773   14328 system_pods.go:89] "etcd-ubuntu-20-agent-9" [3ccd7eaf-cf0f-432d-9934-c13ed80108c6] Running
	I0913 18:22:39.739782   14328 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [939565f6-c84a-48c8-93b6-e34c15288f83] Running
	I0913 18:22:39.739790   14328 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [cbe48c36-d55f-4a82-b1aa-3fd1ce21253c] Running
	I0913 18:22:39.739800   14328 system_pods.go:89] "kube-proxy-7h9jz" [52317402-eb63-48c9-8336-46a0844b829a] Running
	I0913 18:22:39.739806   14328 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [b80efbfb-025f-46ca-9dd2-05a697e7f31b] Running
	I0913 18:22:39.739818   14328 system_pods.go:89] "metrics-server-84c5f94fbc-lkmcp" [5492915c-f03f-42c5-aae6-2a86f778d2cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 18:22:39.739832   14328 system_pods.go:89] "nvidia-device-plugin-daemonset-4lxnd" [4a7fb3ca-f619-4ff0-9c91-dff0f066b225] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 18:22:39.739845   14328 system_pods.go:89] "registry-66c9cd494c-qnsqn" [9d207cfe-fc0d-47fe-ae8e-3720eb38b045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 18:22:39.739857   14328 system_pods.go:89] "registry-proxy-j9v4g" [01e1c35f-1c90-440d-92e7-defa8bfc5517] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 18:22:39.739868   14328 system_pods.go:89] "snapshot-controller-56fcc65765-m7zlt" [225ec73c-c647-4222-980d-449e0b3cdd5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:39.739879   14328 system_pods.go:89] "snapshot-controller-56fcc65765-s9gr4" [d9c3fead-79ec-46ad-988c-a79adb6ce2fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 18:22:39.739891   14328 system_pods.go:89] "storage-provisioner" [2d994015-e9ee-437f-9a93-f03abeb1e209] Running
	I0913 18:22:39.739904   14328 system_pods.go:126] duration metric: took 205.246456ms to wait for k8s-apps to be running ...
	I0913 18:22:39.739917   14328 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 18:22:39.739971   14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0913 18:22:39.755898   14328 system_svc.go:56] duration metric: took 15.9653ms WaitForService to wait for kubelet
	I0913 18:22:39.755929   14328 kubeadm.go:582] duration metric: took 10.077205426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 18:22:39.755955   14328 node_conditions.go:102] verifying NodePressure condition ...
	I0913 18:22:39.787150   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:39.845041   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:39.934253   14328 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0913 18:22:39.934296   14328 node_conditions.go:123] node cpu capacity is 8
	I0913 18:22:39.934311   14328 node_conditions.go:105] duration metric: took 178.349477ms to run NodePressure ...
	I0913 18:22:39.934326   14328 start.go:241] waiting for startup goroutines ...
	I0913 18:22:40.361267   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.362048   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:40.786611   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:40.845949   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.286465   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.345160   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:41.787153   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:41.845094   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.286843   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.346439   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:42.786491   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:42.845016   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.340659   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.344242   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:43.787276   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:43.844802   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.442408   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.443277   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:44.786463   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:44.844741   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.287179   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:45.344440   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:45.786647   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:45.845516   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.285693   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.361321   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:46.786414   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:46.845443   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.286644   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.345643   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:47.787165   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:47.844499   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.286749   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.474499   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:48.787047   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:48.845782   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.287056   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.344329   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:49.789433   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:49.844958   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.360433   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.361224   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:50.862766   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:50.863231   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.286323   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.344199   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:51.787210   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:51.844505   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.289171   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.390735   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:52.787427   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:52.845425   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.286807   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.345620   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:53.786585   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:53.845496   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.286213   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.345608   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:54.786487   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:54.862932   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.286671   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.345831   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:55.786977   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:55.844950   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.287174   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:56.344859   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:56.787602   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:56.845360   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.286744   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.345529   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:57.787303   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:57.844585   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.286887   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.345643   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:58.863456   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 18:22:58.864887   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.286870   14328 kapi.go:107] duration metric: took 28.004041253s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 18:22:59.346033   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:22:59.845320   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.344737   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:00.845320   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.345258   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:01.845115   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.345490   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:02.916539   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.345278   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:03.845118   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.344658   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:04.845961   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.361989   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:05.845142   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.361956   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:06.844399   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.345310   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:07.895620   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.361337   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:08.845567   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.344950   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:09.844524   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.345780   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:10.845128   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.345262   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:11.862386   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.345515   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:12.862232   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.345604   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:13.844968   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.345115   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:14.845814   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.345268   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:15.847085   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.345863   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:16.861925   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.345486   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:17.845293   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 18:23:18.352228   14328 kapi.go:107] duration metric: took 45.011818471s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 18:23:59.258969   14328 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 18:23:59.258991   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:23:59.762060   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.259893   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:00.760044   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:01.260622   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:01.759528   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:02.259771   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:02.760292   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:03.259541   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:03.760129   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:04.259059   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:04.759040   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:05.259825   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:05.761115   14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 18:24:06.260138   14328 kapi.go:107] duration metric: took 1m29.003982108s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 18:24:06.261887   14328 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0913 18:24:06.263794   14328 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 18:24:06.265156   14328 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 18:24:06.266873   14328 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, metrics-server, storage-provisioner, inspektor-gadget, storage-provisioner-rancher, yakd, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0913 18:24:06.268221   14328 addons.go:510] duration metric: took 1m36.594013586s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass metrics-server storage-provisioner inspektor-gadget storage-provisioner-rancher yakd volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0913 18:24:06.268282   14328 start.go:246] waiting for cluster config update ...
	I0913 18:24:06.268306   14328 start.go:255] writing updated cluster config ...
	I0913 18:24:06.268578   14328 exec_runner.go:51] Run: rm -f paused
	I0913 18:24:06.320579   14328 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 18:24:06.322453   14328 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-09-07 03:35:14 UTC, end at Fri 2024-09-13 18:33:58 UTC. --
	Sep 13 18:26:09 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:26:09.585545451Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 18:26:09 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:26:09.588131381Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 18:26:19 ubuntu-20-agent-9 cri-dockerd[14890]: time="2024-09-13T18:26:19Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 13 18:26:21 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:26:21.133920438Z" level=info msg="ignoring event" container=ea98f429e305e5b3ee091caae3ae86c842312f99672a1f167e4cd5b82d730b80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:27:36 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:27:36.594202477Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 18:27:36 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:27:36.596713132Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 18:29:10 ubuntu-20-agent-9 cri-dockerd[14890]: time="2024-09-13T18:29:10Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 13 18:29:11 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:11.886166883Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 18:29:11 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:11.886169720Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 18:29:11 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:11.888799516Z" level=error msg="Error running exec b19d1c667dc12445d7f1d6899eade1b8ea639c4e19eb1689c84ae14d8feb39c1 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 13 18:29:12 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:12.106435645Z" level=info msg="ignoring event" container=3a0b79a845ad96abb297eff75dfd6f542f46e16e5a12ae24ae4f9a16acf6f9c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:30:24 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:30:24.587897113Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 18:30:24 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:30:24.590168854Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 18:32:57 ubuntu-20-agent-9 cri-dockerd[14890]: time="2024-09-13T18:32:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c8d542960c28a5f60ee7b14808a2d0c8b06725da91dde87392ea9a537cf700b1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 13 18:32:58 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:32:58.102042179Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:32:58 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:32:58.104381189Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:33:10 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:10.593239793Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:33:10 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:10.595781327Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:33:35 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:35.598152859Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:33:35 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:35.600798103Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.557143568Z" level=info msg="ignoring event" container=c8d542960c28a5f60ee7b14808a2d0c8b06725da91dde87392ea9a537cf700b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.820940911Z" level=info msg="ignoring event" container=86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.880249002Z" level=info msg="ignoring event" container=c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.968309282Z" level=info msg="ignoring event" container=15e5bd4de548cae7e5969cab004c1c759e551424d13fbb53ac267520285333da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:58 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:58.051347560Z" level=info msg="ignoring event" container=0a9dab71ff066747b671cb9aef58137b7acdaa2ef232b2fe727a994871dbf8c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	3a0b79a845ad9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   beea0859cec70       gadget-sm92k
	68c150b9fb0f4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   f3e7bc3d1392f       gcp-auth-89d5ffd79-rk6x4
	6a15a65f82854       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   b2e574bf39c31       csi-hostpathplugin-7rh6q
	064e874d218aa       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   b2e574bf39c31       csi-hostpathplugin-7rh6q
	d3a19cb71e4a8       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   b2e574bf39c31       csi-hostpathplugin-7rh6q
	933456c5bcb6b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   b2e574bf39c31       csi-hostpathplugin-7rh6q
	a0d29b11b47be       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   b2e574bf39c31       csi-hostpathplugin-7rh6q
	6f1346c0a6bf1       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   b145a385c8e1c       csi-hostpath-resizer-0
	b977c7e284fc5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   b2e574bf39c31       csi-hostpathplugin-7rh6q
	60e5ebb1e8117       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   7fa58fa79b2b3       csi-hostpath-attacher-0
	639657c030424       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   d9f3a06f87a7b       snapshot-controller-56fcc65765-s9gr4
	1af54f0913b72       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   1217f2ce294ed       snapshot-controller-56fcc65765-m7zlt
	dcf9ed79c0306       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        11 minutes ago      Running             yakd                                     0                   b71ed978dbd2d       yakd-dashboard-67d98fc6b-89dgq
	b6b0127df7bb1       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   ff6b138e17439       local-path-provisioner-86d989889c-fkb6p
	02d59fdb6d6fe       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   20deb3afade9e       metrics-server-84c5f94fbc-lkmcp
	41918d553d7c8       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   d8af440651ce8       nvidia-device-plugin-daemonset-4lxnd
	66fed67c8b963       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   622159c2649f8       cloud-spanner-emulator-769b77f747-h9wlf
	129df199ba791       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   d8b66876bc7b7       storage-provisioner
	b6041e483c82c       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   024e4a894a71e       coredns-7c65d6cfc9-dzc9p
	7fb4d4de53e46       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   acfa464adfd09       kube-proxy-7h9jz
	90c524e627d39       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   410aa9552087f       kube-scheduler-ubuntu-20-agent-9
	87c623f59ed62       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   1efc08d0cd074       kube-controller-manager-ubuntu-20-agent-9
	5ea25fa40b23f       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   c595a4f1736c3       etcd-ubuntu-20-agent-9
	67a5b3e35d538       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   439fe09a35b7d       kube-apiserver-ubuntu-20-agent-9
	
	
	==> coredns [b6041e483c82] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:40349 - 16842 "HINFO IN 3748346690135090459.1710582388281534536. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01467965s
	[INFO] 10.244.0.23:46320 - 21531 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00030581s
	[INFO] 10.244.0.23:48340 - 49562 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000168313s
	[INFO] 10.244.0.23:57948 - 13450 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108658s
	[INFO] 10.244.0.23:56229 - 41882 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143935s
	[INFO] 10.244.0.23:49747 - 25314 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009418s
	[INFO] 10.244.0.23:44463 - 2018 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143566s
	[INFO] 10.244.0.23:49505 - 44756 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.003156477s
	[INFO] 10.244.0.23:46277 - 34891 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004631362s
	[INFO] 10.244.0.23:35146 - 8262 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.002662088s
	[INFO] 10.244.0.23:41023 - 51039 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003863363s
	[INFO] 10.244.0.23:48130 - 8388 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002789013s
	[INFO] 10.244.0.23:43062 - 18106 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004005128s
	[INFO] 10.244.0.23:36714 - 22414 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001716971s
	[INFO] 10.244.0.23:55604 - 22984 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.01011837s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-9
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-9
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T18_22_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-9
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:22:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-9
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:30:02 +0000   Fri, 13 Sep 2024 18:22:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:30:02 +0000   Fri, 13 Sep 2024 18:22:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:30:02 +0000   Fri, 13 Sep 2024 18:22:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:30:02 +0000   Fri, 13 Sep 2024 18:22:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.154.0.4
	  Hostname:    ubuntu-20-agent-9
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                4894487b-7b30-e033-3a9d-c6f45b6c4cf8
	  Boot ID:                    12284a47-6cbe-446a-902c-cc7eddd0eaeb
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-h9wlf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-sm92k                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-rk6x4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 coredns-7c65d6cfc9-dzc9p                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-7rh6q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-9                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-9             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-9    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-7h9jz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-9             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-lkmcp              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-4lxnd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-m7zlt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-s9gr4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-fkb6p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-89dgq               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ed b9 a0 be ce 08 06
	[  +1.123945] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 22 ac 13 73 72 08 06
	[  +0.023561] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 32 49 1e 41 06 08 06
	[Sep13 18:23] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 23 e5 97 ca ad 08 06
	[  +2.270764] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a e5 95 ce 38 a0 08 06
	[  +2.538963] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a e6 90 6f b1 b0 08 06
	[  +6.797879] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa b4 29 d5 65 16 08 06
	[  +0.088588] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 ef 9a de 80 6b 08 06
	[  +0.252507] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c0 3c a5 9a fb 08 06
	[ +27.002758] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de f6 8b a2 18 41 08 06
	[  +0.045129] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 a3 be f3 c8 0a 08 06
	[Sep13 18:24] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff c2 b2 7c 7f 0c 1c 08 06
	[  +0.000477] IPv4: martian source 10.244.0.23 from 10.244.0.4, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 fd 13 7e 84 af 08 06
	
	
	==> etcd [5ea25fa40b23] <==
	{"level":"info","ts":"2024-09-13T18:22:20.898385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T18:22:20.898391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-13T18:22:20.898399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
	{"level":"info","ts":"2024-09-13T18:22:20.898407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-13T18:22:20.899441Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T18:22:20.899453Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:22:20.899446Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:22:20.899486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:22:20.899687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T18:22:20.899719Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T18:22:20.900148Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:22:20.900230Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:22:20.900261Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:22:20.900529Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:22:20.900727Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:22:20.901405Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
	{"level":"info","ts":"2024-09-13T18:22:20.901524Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-13T18:22:48.471685Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.977337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-13T18:22:48.471743Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.431617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-09-13T18:22:48.471780Z","caller":"traceutil/trace.go:171","msg":"trace[26277867] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:918; }","duration":"129.093931ms","start":"2024-09-13T18:22:48.342672Z","end":"2024-09-13T18:22:48.471766Z","steps":["trace[26277867] 'range keys from in-memory index tree'  (duration: 128.877781ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:22:48.471790Z","caller":"traceutil/trace.go:171","msg":"trace[1115861767] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:918; }","duration":"137.485361ms","start":"2024-09-13T18:22:48.334292Z","end":"2024-09-13T18:22:48.471777Z","steps":["trace[1115861767] 'range keys from in-memory index tree'  (duration: 137.239788ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:22:48.585565Z","caller":"traceutil/trace.go:171","msg":"trace[1307810227] transaction","detail":"{read_only:false; response_revision:919; number_of_response:1; }","duration":"110.432471ms","start":"2024-09-13T18:22:48.475112Z","end":"2024-09-13T18:22:48.585544Z","steps":["trace[1307810227] 'process raft request'  (duration: 110.222141ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:32:20.916742Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1691}
	{"level":"info","ts":"2024-09-13T18:32:20.941506Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1691,"took":"24.243402ms","hash":2666877437,"current-db-size-bytes":8028160,"current-db-size":"8.0 MB","current-db-size-in-use-bytes":4214784,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-13T18:32:20.941577Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2666877437,"revision":1691,"compact-revision":-1}
	
	
	==> gcp-auth [68c150b9fb0f] <==
	2024/09/13 18:24:05 GCP Auth Webhook started!
	2024/09/13 18:24:21 Ready to marshal response ...
	2024/09/13 18:24:21 Ready to write response ...
	2024/09/13 18:24:21 Ready to marshal response ...
	2024/09/13 18:24:21 Ready to write response ...
	2024/09/13 18:24:44 Ready to marshal response ...
	2024/09/13 18:24:44 Ready to write response ...
	2024/09/13 18:24:45 Ready to marshal response ...
	2024/09/13 18:24:45 Ready to write response ...
	2024/09/13 18:24:45 Ready to marshal response ...
	2024/09/13 18:24:45 Ready to write response ...
	2024/09/13 18:32:57 Ready to marshal response ...
	2024/09/13 18:32:57 Ready to write response ...
	
	
	==> kernel <==
	 18:33:58 up 16 min,  0 users,  load average: 0.08, 0.29, 0.33
	Linux ubuntu-20-agent-9 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [67a5b3e35d53] <==
	W0913 18:23:18.404904       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.38.240:443: connect: connection refused
	W0913 18:23:40.273496       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.165.166:443: connect: connection refused
	E0913 18:23:40.273531       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.165.166:443: connect: connection refused" logger="UnhandledError"
	W0913 18:23:40.300745       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.165.166:443: connect: connection refused
	E0913 18:23:40.300795       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.165.166:443: connect: connection refused" logger="UnhandledError"
	W0913 18:23:59.232086       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.165.166:443: connect: connection refused
	E0913 18:23:59.232129       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.165.166:443: connect: connection refused" logger="UnhandledError"
	I0913 18:24:21.595117       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0913 18:24:21.612571       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0913 18:24:35.027340       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:35.032155       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:35.134419       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 18:24:35.181287       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 18:24:35.190535       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:35.340194       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:35.341915       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 18:24:35.367314       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:35.408498       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0913 18:24:36.055218       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0913 18:24:36.243880       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0913 18:24:36.329146       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0913 18:24:36.366746       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0913 18:24:36.366768       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0913 18:24:36.409428       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 18:24:36.578861       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [87c623f59ed6] <==
	W0913 18:32:47.967749       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:47.967791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:32:50.870333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:50.870378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:32:51.940442       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:51.940488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:05.315523       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:05.315570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:05.609278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:05.609320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:08.433760       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:08.433803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:09.786230       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:09.786304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:21.678124       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:21.678171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:25.843452       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:25.843494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:37.224302       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:37.224344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:41.045683       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:41.045724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:42.562921       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:42.562965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:33:57.784797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="12.007µs"
	
	
	==> kube-proxy [7fb4d4de53e4] <==
	I0913 18:22:30.465267       1 server_linux.go:66] "Using iptables proxy"
	I0913 18:22:30.667673       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
	E0913 18:22:30.670494       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:22:30.745651       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0913 18:22:30.745708       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:22:30.748909       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:22:30.749253       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:22:30.749269       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:22:30.756009       1 config.go:199] "Starting service config controller"
	I0913 18:22:30.756066       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:22:30.756149       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:22:30.756157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:22:30.756972       1 config.go:328] "Starting node config controller"
	I0913 18:22:30.756984       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:22:30.857263       1 shared_informer.go:320] Caches are synced for node config
	I0913 18:22:30.857302       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:22:30.857344       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [90c524e627d3] <==
	W0913 18:22:21.784364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0913 18:22:21.784399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:22:21.784392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0913 18:22:21.784422       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:21.784471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0913 18:22:21.784507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:21.784498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0913 18:22:21.784525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:22.613246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 18:22:22.613285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:22.664007       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:22:22.664051       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 18:22:22.715046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 18:22:22.715086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:22.718574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 18:22:22.718615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:22.733036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:22:22.733083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:22.793057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:22:22.793107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:22.980044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 18:22:22.980093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:22:22.999578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:22:22.999619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0913 18:22:25.782015       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-09-07 03:35:14 UTC, end at Fri 2024-09-13 18:33:58 UTC. --
	Sep 13 18:33:51 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:51.459152   15819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="64d5301d-22c2-4431-8f82-d176079a0e29"
	Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.773826   15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14b30f2c-4501-4306-a9f9-67206c2861f5-gcp-creds\") pod \"14b30f2c-4501-4306-a9f9-67206c2861f5\" (UID: \"14b30f2c-4501-4306-a9f9-67206c2861f5\") "
	Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.773910   15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpp5v\" (UniqueName: \"kubernetes.io/projected/14b30f2c-4501-4306-a9f9-67206c2861f5-kube-api-access-kpp5v\") pod \"14b30f2c-4501-4306-a9f9-67206c2861f5\" (UID: \"14b30f2c-4501-4306-a9f9-67206c2861f5\") "
	Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.773997   15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14b30f2c-4501-4306-a9f9-67206c2861f5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "14b30f2c-4501-4306-a9f9-67206c2861f5" (UID: "14b30f2c-4501-4306-a9f9-67206c2861f5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.776685   15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b30f2c-4501-4306-a9f9-67206c2861f5-kube-api-access-kpp5v" (OuterVolumeSpecName: "kube-api-access-kpp5v") pod "14b30f2c-4501-4306-a9f9-67206c2861f5" (UID: "14b30f2c-4501-4306-a9f9-67206c2861f5"). InnerVolumeSpecName "kube-api-access-kpp5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.874858   15819 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14b30f2c-4501-4306-a9f9-67206c2861f5-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.874891   15819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kpp5v\" (UniqueName: \"kubernetes.io/projected/14b30f2c-4501-4306-a9f9-67206c2861f5-kube-api-access-kpp5v\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:58.059735   15819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod01e1c35f-1c90-440d-92e7-defa8bfc5517/0a9dab71ff066747b671cb9aef58137b7acdaa2ef232b2fe727a994871dbf8c2\": RecentStats: unable to find data in memory cache]"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.076685   15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqgvq\" (UniqueName: \"kubernetes.io/projected/9d207cfe-fc0d-47fe-ae8e-3720eb38b045-kube-api-access-lqgvq\") pod \"9d207cfe-fc0d-47fe-ae8e-3720eb38b045\" (UID: \"9d207cfe-fc0d-47fe-ae8e-3720eb38b045\") "
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.079144   15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d207cfe-fc0d-47fe-ae8e-3720eb38b045-kube-api-access-lqgvq" (OuterVolumeSpecName: "kube-api-access-lqgvq") pod "9d207cfe-fc0d-47fe-ae8e-3720eb38b045" (UID: "9d207cfe-fc0d-47fe-ae8e-3720eb38b045"). InnerVolumeSpecName "kube-api-access-lqgvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.177391   15819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lqgvq\" (UniqueName: \"kubernetes.io/projected/9d207cfe-fc0d-47fe-ae8e-3720eb38b045-kube-api-access-lqgvq\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.277822   15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tnb6\" (UniqueName: \"kubernetes.io/projected/01e1c35f-1c90-440d-92e7-defa8bfc5517-kube-api-access-7tnb6\") pod \"01e1c35f-1c90-440d-92e7-defa8bfc5517\" (UID: \"01e1c35f-1c90-440d-92e7-defa8bfc5517\") "
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.279871   15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e1c35f-1c90-440d-92e7-defa8bfc5517-kube-api-access-7tnb6" (OuterVolumeSpecName: "kube-api-access-7tnb6") pod "01e1c35f-1c90-440d-92e7-defa8bfc5517" (UID: "01e1c35f-1c90-440d-92e7-defa8bfc5517"). InnerVolumeSpecName "kube-api-access-7tnb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.321526   15819 scope.go:117] "RemoveContainer" containerID="c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.341116   15819 scope.go:117] "RemoveContainer" containerID="c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:58.342642   15819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081" containerID="c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.342709   15819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"} err="failed to get container status \"c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081\": rpc error: code = Unknown desc = Error response from daemon: No such container: c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.342755   15819 scope.go:117] "RemoveContainer" containerID="86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.367581   15819 scope.go:117] "RemoveContainer" containerID="86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:58.368579   15819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8" containerID="86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.368628   15819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"} err="failed to get container status \"86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8\": rpc error: code = Unknown desc = Error response from daemon: No such container: 86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.378194   15819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7tnb6\" (UniqueName: \"kubernetes.io/projected/01e1c35f-1c90-440d-92e7-defa8bfc5517-kube-api-access-7tnb6\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.469680   15819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e1c35f-1c90-440d-92e7-defa8bfc5517" path="/var/lib/kubelet/pods/01e1c35f-1c90-440d-92e7-defa8bfc5517/volumes"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.470002   15819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b30f2c-4501-4306-a9f9-67206c2861f5" path="/var/lib/kubelet/pods/14b30f2c-4501-4306-a9f9-67206c2861f5/volumes"
	Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.470186   15819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d207cfe-fc0d-47fe-ae8e-3720eb38b045" path="/var/lib/kubelet/pods/9d207cfe-fc0d-47fe-ae8e-3720eb38b045/volumes"
	
	
	==> storage-provisioner [129df199ba79] <==
	I0913 18:22:32.243137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:22:32.254080       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:22:32.254132       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:22:32.265872       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:22:32.266068       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_f1861157-ef62-4af4-8fbc-179a6d9017f4!
	I0913 18:22:32.268098       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ebc40ac-d000-4dbb-a657-8fb345c1c3c9", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_f1861157-ef62-4af4-8fbc-179a6d9017f4 became leader
	I0913 18:22:32.366685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_f1861157-ef62-4af4-8fbc-179a6d9017f4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-9/10.154.0.4
	Start Time:       Fri, 13 Sep 2024 18:24:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gvhpp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gvhpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-9
	  Normal   Pulling    7m50s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m50s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m50s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m24s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m2s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.88s)

                                                
                                    

Test pass (104/166)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.82
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 1.52
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 79.05
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 116.3
29 TestAddons/serial/Volcano 38.47
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 11.49
36 TestAddons/parallel/MetricsServer 5.4
38 TestAddons/parallel/CSI 31.08
39 TestAddons/parallel/Headlamp 16.87
40 TestAddons/parallel/CloudSpanner 6.26
42 TestAddons/parallel/NvidiaDevicePlugin 5.23
43 TestAddons/parallel/Yakd 10.42
44 TestAddons/StoppedEnableDisable 10.7
46 TestCertExpiration 228.61
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 25.79
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 26.16
61 TestFunctional/serial/KubeContext 0.05
62 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/MinikubeKubectlCmd 0.11
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
66 TestFunctional/serial/ExtraConfig 36.28
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.87
69 TestFunctional/serial/LogsFileCmd 0.9
70 TestFunctional/serial/InvalidService 4.86
72 TestFunctional/parallel/ConfigCmd 0.28
73 TestFunctional/parallel/DashboardCmd 5.04
74 TestFunctional/parallel/DryRun 0.17
75 TestFunctional/parallel/InternationalLanguage 0.09
76 TestFunctional/parallel/StatusCmd 0.43
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.25
80 TestFunctional/parallel/ProfileCmd/profile_list 0.24
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.24
83 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
84 TestFunctional/parallel/ServiceCmd/List 0.35
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.16
87 TestFunctional/parallel/ServiceCmd/Format 0.15
88 TestFunctional/parallel/ServiceCmd/URL 0.16
89 TestFunctional/parallel/ServiceCmdConnect 8.32
90 TestFunctional/parallel/AddonsCmd 0.11
91 TestFunctional/parallel/PersistentVolumeClaim 20.23
104 TestFunctional/parallel/MySQL 22.46
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.58
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.09
113 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/Version/short 0.04
118 TestFunctional/parallel/Version/components 0.39
119 TestFunctional/parallel/License 0.94
120 TestFunctional/delete_echo-server_images 0.03
121 TestFunctional/delete_my-image_image 0.02
122 TestFunctional/delete_minikube_cached_images 0.02
127 TestImageBuild/serial/Setup 14.37
128 TestImageBuild/serial/NormalBuild 2.78
129 TestImageBuild/serial/BuildWithBuildArg 0.9
130 TestImageBuild/serial/BuildWithDockerIgnore 0.63
131 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.78
135 TestJSONOutput/start/Command 29.69
136 TestJSONOutput/start/Audit 0
138 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/pause/Command 0.52
142 TestJSONOutput/pause/Audit 0
144 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/unpause/Command 0.39
148 TestJSONOutput/unpause/Audit 0
150 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/stop/Command 10.48
154 TestJSONOutput/stop/Audit 0
156 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
158 TestErrorJSONOutput 0.21
163 TestMainNoArgs 0.05
164 TestMinikubeProfile 34.2
172 TestPause/serial/Start 26.79
173 TestPause/serial/SecondStartNoReconfiguration 30.34
174 TestPause/serial/Pause 0.51
175 TestPause/serial/VerifyStatus 0.13
176 TestPause/serial/Unpause 0.42
177 TestPause/serial/PauseAgain 0.54
178 TestPause/serial/DeletePaused 1.65
179 TestPause/serial/VerifyDeletedResources 0.06
193 TestRunningBinaryUpgrade 75.42
195 TestStoppedBinaryUpgrade/Setup 2.3
196 TestStoppedBinaryUpgrade/Upgrade 50.75
197 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
198 TestKubernetesUpgrade 318.26
TestDownloadOnly/v1.20.0/json-events (1.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.820711395s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.82s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (56.296987ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:46
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:46.159578   10486 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:46.159841   10486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:46.159850   10486 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:46.159854   10486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:46.160046   10486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3707/.minikube/bin
	W0913 18:20:46.160159   10486 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19636-3707/.minikube/config/config.json: open /home/jenkins/minikube-integration/19636-3707/.minikube/config/config.json: no such file or directory
	I0913 18:20:46.160701   10486 out.go:352] Setting JSON to true
	I0913 18:20:46.161564   10486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":186,"bootTime":1726251460,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:20:46.161649   10486 start.go:139] virtualization: kvm guest
	I0913 18:20:46.164329   10486 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0913 18:20:46.164428   10486 notify.go:220] Checking for updates...
	W0913 18:20:46.164430   10486 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3707/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:20:46.165912   10486 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:20:46.167438   10486 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:46.168876   10486 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	I0913 18:20:46.170243   10486 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	I0913 18:20:46.171444   10486 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (1.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.524605355s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.52s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (55.392176ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 18:20:48
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 18:20:48.274481   10637 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:20:48.274741   10637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:48.274750   10637 out.go:358] Setting ErrFile to fd 2...
	I0913 18:20:48.274755   10637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:20:48.274928   10637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3707/.minikube/bin
	I0913 18:20:48.275467   10637 out.go:352] Setting JSON to true
	I0913 18:20:48.276267   10637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":188,"bootTime":1726251460,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:20:48.276362   10637 start.go:139] virtualization: kvm guest
	I0913 18:20:48.278653   10637 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 18:20:48.278769   10637 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3707/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:20:48.278821   10637 notify.go:220] Checking for updates...
	I0913 18:20:48.280258   10637 out.go:169] MINIKUBE_LOCATION=19636
	I0913 18:20:48.281707   10637 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:20:48.282960   10637 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	I0913 18:20:48.284077   10637 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	I0913 18:20:48.285347   10637 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:37771 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.56s)

TestOffline (79.05s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m17.342067173s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.711965634s)
--- PASS: TestOffline (79.05s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (45.513044ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (49.265259ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (116.3s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m56.302466014s)
--- PASS: TestAddons/Setup (116.30s)

TestAddons/serial/Volcano (38.47s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 9.017672ms
addons_test.go:843: volcano-admission stabilized in 9.060927ms
addons_test.go:851: volcano-controller stabilized in 9.105339ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-4svsc" [cea2d4b6-e528-4978-8d7b-1f49e0971283] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003483808s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-zj4c2" [ceff8247-0ce1-46ee-994f-6467f23514b0] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003827049s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-mqt5z" [38e357ad-5d8c-4700-becf-c34b62e0c70c] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003299343s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4adce005-b71f-4059-9aa4-2fc7d8369390] Pending
helpers_test.go:344: "test-job-nginx-0" [4adce005-b71f-4059-9aa4-2fc7d8369390] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4adce005-b71f-4059-9aa4-2fc7d8369390] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003373403s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.128739682s)
--- PASS: TestAddons/serial/Volcano (38.47s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (11.49s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sm92k" [60d1040b-ac3e-4a14-b223-653d5b3bb594] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004674421s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.481776421s)
--- PASS: TestAddons/parallel/InspektorGadget (11.49s)

TestAddons/parallel/MetricsServer (5.4s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.935988ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lkmcp" [5492915c-f03f-42c5-aae6-2a86f778d2cc] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004093837s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.40s)

TestAddons/parallel/CSI (31.08s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.033595ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7abb593e-0bf1-4226-8419-26a89e1e7e26] Pending
helpers_test.go:344: "task-pv-pod" [7abb593e-0bf1-4226-8419-26a89e1e7e26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7abb593e-0bf1-4226-8419-26a89e1e7e26] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.00387076s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context minikube delete pod task-pv-pod: (1.307212185s)
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [65a5bf74-c798-4d16-aef5-85af867c6c39] Pending
helpers_test.go:344: "task-pv-pod-restore" [65a5bf74-c798-4d16-aef5-85af867c6c39] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [65a5bf74-c798-4d16-aef5-85af867c6c39] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003424932s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.296278049s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (31.08s)

TestAddons/parallel/Headlamp (16.87s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-btg7j" [7347bc17-b0f7-4de0-b523-38beec58d7b0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-btg7j" [7347bc17-b0f7-4de0-b523-38beec58d7b0] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003843591s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.400204579s)
--- PASS: TestAddons/parallel/Headlamp (16.87s)

TestAddons/parallel/CloudSpanner (6.26s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-h9wlf" [9838c026-216c-41e1-b35b-e5de456b9b40] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003874331s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (6.26s)

TestAddons/parallel/NvidiaDevicePlugin (5.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4lxnd" [4a7fb3ca-f619-4ff0-9c91-dff0f066b225] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003570066s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.23s)

TestAddons/parallel/Yakd (10.42s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-89dgq" [1a781d52-694c-4d12-8ab8-30e675cd46f7] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003436416s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.412312189s)
--- PASS: TestAddons/parallel/Yakd (10.42s)

TestAddons/StoppedEnableDisable (10.7s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.38863588s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.70s)

TestCertExpiration (228.61s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.748211488s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.154521365s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.710603371s)
--- PASS: TestCertExpiration (228.61s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19636-3707/.minikube/files/etc/test/nested/copy/10474/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (25.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (25.78511467s)
--- PASS: TestFunctional/serial/StartWithProxy (25.79s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.16s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (26.160471304s)
functional_test.go:663: soft start took 26.161138079s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.16s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (36.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.283566045s)
functional_test.go:761: restart took 36.283703559s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.28s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

TestFunctional/serial/LogsFileCmd (0.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2424087882/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.90s)

TestFunctional/serial/InvalidService (4.86s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (187.275896ms)

-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.154.0.4:32128 |
	|-----------|-------------|-------------|-------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.490142871s)
--- PASS: TestFunctional/serial/InvalidService (4.86s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (44.608954ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.901672ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)
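The round trip above can be reproduced without a cluster. The sketch below uses a hypothetical `config_get` function as a stand-in for `out/minikube-linux-amd64 -p minikube config get cpus`, assuming only what the log shows: an unset key yields exit status 14, and the harness records that Non-zero exit as the expected outcome rather than a failure.

```shell
#!/bin/sh
# Sketch of the exit-code contract exercised by ConfigCmd above.
# config_get is a hypothetical stand-in for `minikube config get cpus`,
# not the real binary; CPUS emulates the stored config value.
config_get() {
  if [ -n "$CPUS" ]; then
    echo "$CPUS"                  # key set: print value, exit 0
  else
    echo "Error: specified key could not be found in config" >&2
    return 14                     # key unset: same status the test expects
  fi
}

CPUS=2
config_get                        # prints 2
CPUS=""
rc=0; config_get 2>/dev/null || rc=$?
echo "unset key -> exit $rc"      # unset key -> exit 14
```

The test then unsets the key again and re-checks that `config get` fails, confirming both transitions of the set/unset cycle.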

TestFunctional/parallel/DashboardCmd (5.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/13 18:41:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45240: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.04s)

TestFunctional/parallel/DryRun (0.17s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (87.323057ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0913 18:41:07.010305   45581 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:41:07.010453   45581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:07.010466   45581 out.go:358] Setting ErrFile to fd 2...
	I0913 18:41:07.010473   45581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:07.010654   45581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3707/.minikube/bin
	I0913 18:41:07.011199   45581 out.go:352] Setting JSON to false
	I0913 18:41:07.012359   45581 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1407,"bootTime":1726251460,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:41:07.012456   45581 start.go:139] virtualization: kvm guest
	I0913 18:41:07.015317   45581 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 18:41:07.017380   45581 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3707/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:41:07.017434   45581 notify.go:220] Checking for updates...
	I0913 18:41:07.017458   45581 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:41:07.019422   45581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:41:07.021198   45581 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	I0913 18:41:07.023070   45581 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	I0913 18:41:07.024685   45581 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:41:07.026104   45581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:41:07.028278   45581 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:41:07.028607   45581 exec_runner.go:51] Run: systemctl --version
	I0913 18:41:07.031582   45581 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:41:07.044793   45581 out.go:177] * Using the none driver based on existing profile
	I0913 18:41:07.046166   45581 start.go:297] selected driver: none
	I0913 18:41:07.046185   45581 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:41:07.046346   45581 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:41:07.046375   45581 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0913 18:41:07.046759   45581 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0913 18:41:07.048695   45581 out.go:201] 
	W0913 18:41:07.050071   45581 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 18:41:07.052085   45581 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)

TestFunctional/parallel/InternationalLanguage (0.09s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (87.772566ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0913 18:41:07.184719   45626 out.go:345] Setting OutFile to fd 1 ...
	I0913 18:41:07.184839   45626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:07.184847   45626 out.go:358] Setting ErrFile to fd 2...
	I0913 18:41:07.184851   45626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 18:41:07.185094   45626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3707/.minikube/bin
	I0913 18:41:07.185646   45626 out.go:352] Setting JSON to false
	I0913 18:41:07.186670   45626 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1407,"bootTime":1726251460,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 18:41:07.186770   45626 start.go:139] virtualization: kvm guest
	I0913 18:41:07.189244   45626 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0913 18:41:07.191057   45626 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3707/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 18:41:07.191092   45626 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 18:41:07.191168   45626 notify.go:220] Checking for updates...
	I0913 18:41:07.194212   45626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 18:41:07.195610   45626 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	I0913 18:41:07.197135   45626 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	I0913 18:41:07.198750   45626 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 18:41:07.200315   45626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 18:41:07.202374   45626 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 18:41:07.202640   45626 exec_runner.go:51] Run: systemctl --version
	I0913 18:41:07.205432   45626 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 18:41:07.217161   45626 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0913 18:41:07.218794   45626 start.go:297] selected driver: none
	I0913 18:41:07.218829   45626 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 18:41:07.218943   45626 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 18:41:07.218968   45626 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0913 18:41:07.219264   45626 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0913 18:41:07.222438   45626 out.go:201] 
	W0913 18:41:07.224072   45626 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 18:41:07.225395   45626 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)

TestFunctional/parallel/StatusCmd (0.43s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)

TestFunctional/parallel/ProfileCmd/profile_list (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "190.077292ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.840053ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.24s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "190.912451ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.742606ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-mjjgl" [0c52d520-2814-4e2a-8c24-373a197d0148] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-mjjgl" [0c52d520-2814-4e2a-8c24-373a197d0148] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003954064s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "338.065499ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.154.0.4:30969
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.154.0.4:30969
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.16s)

TestFunctional/parallel/ServiceCmdConnect (8.32s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8fpjf" [a9790bc3-8db5-4851-a2c3-720ecf496b43] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8fpjf" [a9790bc3-8db5-4851-a2c3-720ecf496b43] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004309725s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.154.0.4:32546
functional_test.go:1675: http://10.154.0.4:32546: success! body:

Hostname: hello-node-connect-67bdd5bbb4-8fpjf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.154.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=10.154.0.4:32546
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.32s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (20.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c851a682-1268-40ed-9b94-0c070bea748c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003459503s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [50837015-4dbf-45f5-b222-00086c41dfa1] Pending
helpers_test.go:344: "sp-pod" [50837015-4dbf-45f5-b222-00086c41dfa1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [50837015-4dbf-45f5-b222-00086c41dfa1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003573329s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4c57ee1e-c2a3-4c77-bfe4-3663aed767bb] Pending
helpers_test.go:344: "sp-pod" [4c57ee1e-c2a3-4c77-bfe4-3663aed767bb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4c57ee1e-c2a3-4c77-bfe4-3663aed767bb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00348286s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.23s)

TestFunctional/parallel/MySQL (22.46s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-q2hb2" [f492b6a7-c7ac-497d-a101-b16247b24278] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-q2hb2" [f492b6a7-c7ac-497d-a101-b16247b24278] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004416543s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-q2hb2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-q2hb2 -- mysql -ppassword -e "show databases;": exit status 1 (175.307351ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-q2hb2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-q2hb2 -- mysql -ppassword -e "show databases;": exit status 1 (111.318758ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-q2hb2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-q2hb2 -- mysql -ppassword -e "show databases;": exit status 1 (116.468706ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-q2hb2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.46s)
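The ERROR 2002 retries above are expected: the pod reports Running before mysqld finishes initializing, so the query is simply re-run until it succeeds. A minimal sketch of that retry loop, where `probe` is a hypothetical stand-in for the real `kubectl --context minikube exec ... -- mysql -ppassword -e "show databases;"` call:

```shell
#!/bin/sh
# Retry-until-ready loop, as exercised implicitly above. probe fails
# twice (emulating "Can't connect ... mysqld.sock") and then succeeds,
# mirroring the three Non-zero exits followed by a passing run.
attempts=0
probe() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]         # succeeds on the third attempt
}

until probe; do
  sleep 0                       # the real harness waits between retries
done
echo "query succeeded after $attempts attempts"   # ... after 3 attempts
```

The harness's timeout (10m0s here) bounds this loop; the test passes as long as one attempt succeeds before the deadline.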

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.58s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.582899209s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.58s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.088607244s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.09s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.39s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

TestFunctional/parallel/License (0.94s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.94s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (14.37s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.372737274s)
--- PASS: TestImageBuild/serial/Setup (14.37s)

TestImageBuild/serial/NormalBuild (2.78s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (2.781289854s)
--- PASS: TestImageBuild/serial/NormalBuild (2.78s)

TestImageBuild/serial/BuildWithBuildArg (0.9s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.90s)

TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

TestJSONOutput/start/Command (29.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (29.689622675s)
--- PASS: TestJSONOutput/start/Command (29.69s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.52s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.39s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.39s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.48s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.47631547s)
--- PASS: TestJSONOutput/stop/Command (10.48s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.104597ms)

-- stdout --
	{"specversion":"1.0","id":"e40b29fc-89cf-4e0e-bf05-4a9bef4a856f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aac0cf4c-df34-45e2-9b40-78362a44bca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"f07c04e3-1d8f-4714-ad61-148738fd9a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2164ab2a-2260-495e-bb09-0c9302ea394e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig"}}
	{"specversion":"1.0","id":"cfebdd1c-999b-4b22-9596-341e2e3f662b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube"}}
	{"specversion":"1.0","id":"497004c2-485d-4faf-998f-74a34ce6209c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ad04b6dc-93bc-4e41-9ccb-659df49195c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ce84bae1-195c-491e-85c7-7de6af5522dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (34.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.605542172s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.738831019s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.220384552s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
E0913 18:44:06.332216   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:06.339108   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:06.350538   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:06.371961   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMinikubeProfile (34.20s)

TestPause/serial/Start (26.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
E0913 18:44:06.413796   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:06.495885   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:06.657418   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:06.979079   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:07.621127   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
E0913 18:44:08.902431   10474 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (26.792607706s)
--- PASS: TestPause/serial/Start (26.79s)

TestPause/serial/SecondStartNoReconfiguration (30.34s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.337007147s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.34s)

TestPause/serial/Pause (0.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (125.525536ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)

TestPause/serial/Unpause (0.42s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.42s)

TestPause/serial/PauseAgain (0.54s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

TestPause/serial/DeletePaused (1.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.645942801s)
--- PASS: TestPause/serial/DeletePaused (1.65s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (75.42s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.462197264 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.462197264 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (34.717106164s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.992519051s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.019419681s)
--- PASS: TestRunningBinaryUpgrade (75.42s)

TestStoppedBinaryUpgrade/Setup (2.3s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

TestStoppedBinaryUpgrade/Upgrade (50.75s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3607422545 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3607422545 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.973976771s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3607422545 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3607422545 -p minikube stop: (23.748173501s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.029614484s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.75s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

TestKubernetesUpgrade (318.26s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.391571116s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.337049084s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (84.418184ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m17.79433299s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (75.287783ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19636
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.288912434s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.222683892s)
--- PASS: TestKubernetesUpgrade (318.26s)

Test skip (61/166)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
97 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
98 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
102 TestFunctional/parallel/SSHCmd 0
103 TestFunctional/parallel/CpCmd 0
105 TestFunctional/parallel/FileSync 0
106 TestFunctional/parallel/CertSync 0
111 TestFunctional/parallel/DockerEnv 0
112 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/ImageCommands 0
115 TestFunctional/parallel/NonActiveRuntimeDisabled 0
123 TestGvisorAddon 0
124 TestMultiControlPlane 0
132 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
159 TestKicCustomNetwork 0
160 TestKicExistingNetwork 0
161 TestKicCustomSubnet 0
162 TestKicStaticIP 0
165 TestMountStart 0
166 TestMultiNode 0
167 TestNetworkPlugins 0
168 TestNoKubernetes 0
169 TestChangeNoneUser 0
180 TestPreload 0
181 TestScheduledStopWindows 0
182 TestScheduledStopUnix 0
183 TestSkaffold 0
186 TestStartStop/group/old-k8s-version 0.13
187 TestStartStop/group/newest-cni 0.13
188 TestStartStop/group/default-k8s-diff-port 0.13
189 TestStartStop/group/no-preload 0.13
190 TestStartStop/group/disable-driver-mounts 0.13
191 TestStartStop/group/embed-certs 0.13
192 TestInsufficientStorage 0
199 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)