Test Report: none_Linux 19649

32fce3c1cb58db02ee1cd4b36165a584c8a30f83:2024-09-16:36244

Tests failed: 1/167

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 71.83        |
TestAddons/parallel/Registry (71.83s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.795599ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-cg447" [5f1f5d00-6103-4a68-904b-02c56427ec65] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003494245s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q8dsq" [0679d7d4-7310-4135-aee2-0654bc4e3e00] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003879005s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.085267608s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/16 17:25:14 [DEBUG] GET http://10.164.0.3:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:35125               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:12 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:12 UTC | 16 Sep 24 17:12 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 16 Sep 24 17:12 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 16 Sep 24 17:12 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 16 Sep 24 17:12 UTC | 16 Sep 24 17:15 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 17:15 UTC | 16 Sep 24 17:16 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:25 UTC | 16 Sep 24 17:25 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 16 Sep 24 17:25 UTC | 16 Sep 24 17:25 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:12:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:12:26.144632  387181 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:12:26.144765  387181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:12:26.144776  387181 out.go:358] Setting ErrFile to fd 2...
	I0916 17:12:26.144782  387181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:12:26.144963  387181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-376650/.minikube/bin
	I0916 17:12:26.145540  387181 out.go:352] Setting JSON to false
	I0916 17:12:26.146463  387181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3289,"bootTime":1726503457,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:12:26.146577  387181 start.go:139] virtualization: kvm guest
	I0916 17:12:26.148785  387181 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:12:26.150077  387181 out.go:177]   - MINIKUBE_LOCATION=19649
	W0916 17:12:26.150063  387181 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-376650/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:12:26.150085  387181 notify.go:220] Checking for updates...
	I0916 17:12:26.152618  387181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:12:26.153913  387181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	I0916 17:12:26.155056  387181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	I0916 17:12:26.156188  387181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:12:26.157474  387181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:12:26.158862  387181 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:12:26.169461  387181 out.go:177] * Using the none driver based on user configuration
	I0916 17:12:26.170545  387181 start.go:297] selected driver: none
	I0916 17:12:26.170557  387181 start.go:901] validating driver "none" against <nil>
	I0916 17:12:26.170568  387181 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:12:26.170612  387181 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 17:12:26.170888  387181 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0916 17:12:26.171459  387181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:12:26.171774  387181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 17:12:26.171809  387181 cni.go:84] Creating CNI manager for ""
	I0916 17:12:26.171874  387181 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:12:26.171886  387181 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:12:26.171974  387181 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:12:26.174098  387181 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 17:12:26.175413  387181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/config.json ...
	I0916 17:12:26.175442  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/config.json: {Name:mk101ec0f5a26760b7b6ea3f4f1fd448438fbc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:26.175546  387181 start.go:360] acquireMachinesLock for minikube: {Name:mk69a376b264c055afdb850c2ed62dd77462ba3a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 17:12:26.175573  387181 start.go:364] duration metric: took 15.781µs to acquireMachinesLock for "minikube"
	I0916 17:12:26.175590  387181 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 17:12:26.175647  387181 start.go:125] createHost starting for "" (driver="none")
	I0916 17:12:26.177074  387181 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0916 17:12:26.178267  387181 exec_runner.go:51] Run: systemctl --version
	I0916 17:12:26.180783  387181 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0916 17:12:26.180828  387181 client.go:168] LocalClient.Create starting
	I0916 17:12:26.180891  387181 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-376650/.minikube/certs/ca.pem
	I0916 17:12:26.180928  387181 main.go:141] libmachine: Decoding PEM data...
	I0916 17:12:26.180941  387181 main.go:141] libmachine: Parsing certificate...
	I0916 17:12:26.180990  387181 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-376650/.minikube/certs/cert.pem
	I0916 17:12:26.181011  387181 main.go:141] libmachine: Decoding PEM data...
	I0916 17:12:26.181019  387181 main.go:141] libmachine: Parsing certificate...
	I0916 17:12:26.181331  387181 client.go:171] duration metric: took 491.665µs to LocalClient.Create
	I0916 17:12:26.181354  387181 start.go:167] duration metric: took 572.951µs to libmachine.API.Create "minikube"
	I0916 17:12:26.181365  387181 start.go:293] postStartSetup for "minikube" (driver="none")
	I0916 17:12:26.181411  387181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 17:12:26.181445  387181 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 17:12:26.190028  387181 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 17:12:26.190055  387181 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 17:12:26.190064  387181 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 17:12:26.191817  387181 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0916 17:12:26.192889  387181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-376650/.minikube/addons for local assets ...
	I0916 17:12:26.192944  387181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-376650/.minikube/files for local assets ...
	I0916 17:12:26.192971  387181 start.go:296] duration metric: took 11.600584ms for postStartSetup
	I0916 17:12:26.193550  387181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/config.json ...
	I0916 17:12:26.193687  387181 start.go:128] duration metric: took 18.030969ms to createHost
	I0916 17:12:26.193704  387181 start.go:83] releasing machines lock for "minikube", held for 18.120763ms
	I0916 17:12:26.194067  387181 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 17:12:26.194190  387181 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0916 17:12:26.195943  387181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 17:12:26.195998  387181 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 17:12:26.205133  387181 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 17:12:26.205159  387181 start.go:495] detecting cgroup driver to use...
	I0916 17:12:26.205196  387181 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 17:12:26.205408  387181 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 17:12:26.223785  387181 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 17:12:26.232296  387181 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 17:12:26.240699  387181 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 17:12:26.240789  387181 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 17:12:26.250463  387181 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 17:12:26.259769  387181 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 17:12:26.269437  387181 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 17:12:26.277868  387181 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 17:12:26.287090  387181 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 17:12:26.296426  387181 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 17:12:26.304826  387181 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 17:12:26.313675  387181 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 17:12:26.321081  387181 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 17:12:26.329252  387181 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 17:12:26.551385  387181 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0916 17:12:26.616753  387181 start.go:495] detecting cgroup driver to use...
	I0916 17:12:26.616813  387181 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 17:12:26.616955  387181 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 17:12:26.635485  387181 exec_runner.go:51] Run: which cri-dockerd
	I0916 17:12:26.636420  387181 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 17:12:26.645161  387181 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0916 17:12:26.645182  387181 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 17:12:26.645221  387181 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 17:12:26.652950  387181 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 17:12:26.653141  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1716944367 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0916 17:12:26.661135  387181 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0916 17:12:26.902307  387181 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0916 17:12:27.132499  387181 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 17:12:27.132648  387181 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0916 17:12:27.132665  387181 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0916 17:12:27.132712  387181 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0916 17:12:27.141331  387181 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0916 17:12:27.141497  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3672942575 /etc/docker/daemon.json
	I0916 17:12:27.149896  387181 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 17:12:27.383631  387181 exec_runner.go:51] Run: sudo systemctl restart docker
	I0916 17:12:27.673710  387181 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 17:12:27.684894  387181 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0916 17:12:27.701621  387181 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 17:12:27.713219  387181 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0916 17:12:27.946036  387181 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0916 17:12:28.208747  387181 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 17:12:28.445493  387181 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0916 17:12:28.459747  387181 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 17:12:28.470378  387181 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 17:12:28.690634  387181 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0916 17:12:28.762667  387181 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 17:12:28.762775  387181 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0916 17:12:28.764249  387181 start.go:563] Will wait 60s for crictl version
	I0916 17:12:28.764323  387181 exec_runner.go:51] Run: which crictl
	I0916 17:12:28.765375  387181 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0916 17:12:28.797902  387181 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 17:12:28.797965  387181 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 17:12:28.819876  387181 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 17:12:28.843525  387181 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 17:12:28.843630  387181 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0916 17:12:28.846535  387181 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0916 17:12:28.847861  387181 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.164.0.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 17:12:28.847973  387181 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:12:28.847984  387181 kubeadm.go:934] updating node { 10.164.0.3 8443 v1.31.1 docker true true} ...
	I0916 17:12:28.848073  387181 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-11 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.164.0.3 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0916 17:12:28.848115  387181 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0916 17:12:28.899316  387181 cni.go:84] Creating CNI manager for ""
	I0916 17:12:28.899349  387181 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:12:28.899362  387181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 17:12:28.899385  387181 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.164.0.3 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-11 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.164.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.164.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 17:12:28.899543  387181 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.164.0.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-11"
	  kubeletExtraArgs:
	    node-ip: 10.164.0.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.164.0.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 17:12:28.899633  387181 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 17:12:28.908088  387181 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 17:12:28.908145  387181 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 17:12:28.916057  387181 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 17:12:28.916101  387181 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 17:12:28.916105  387181 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 17:12:28.916157  387181 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:12:28.916159  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 17:12:28.916171  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 17:12:28.927268  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 17:12:28.970443  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1988923403 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 17:12:28.972446  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3723771488 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 17:12:28.997848  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1297228915 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 17:12:29.064756  387181 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 17:12:29.074002  387181 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0916 17:12:29.074025  387181 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 17:12:29.074064  387181 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 17:12:29.081592  387181 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0916 17:12:29.081769  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2948601568 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0916 17:12:29.089692  387181 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0916 17:12:29.089715  387181 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0916 17:12:29.089766  387181 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0916 17:12:29.097298  387181 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 17:12:29.097443  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3392541889 /lib/systemd/system/kubelet.service
	I0916 17:12:29.106246  387181 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 17:12:29.106366  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1797442899 /var/tmp/minikube/kubeadm.yaml.new
	I0916 17:12:29.114197  387181 exec_runner.go:51] Run: grep 10.164.0.3	control-plane.minikube.internal$ /etc/hosts
	I0916 17:12:29.115537  387181 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 17:12:29.340748  387181 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 17:12:29.357713  387181 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube for IP: 10.164.0.3
	I0916 17:12:29.357737  387181 certs.go:194] generating shared ca certs ...
	I0916 17:12:29.357757  387181 certs.go:226] acquiring lock for ca certs: {Name:mk20a587b261fffb9a6ea6e24c0b104ff46ab50d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:29.357914  387181 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-376650/.minikube/ca.key
	I0916 17:12:29.357970  387181 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-376650/.minikube/proxy-client-ca.key
	I0916 17:12:29.357983  387181 certs.go:256] generating profile certs ...
	I0916 17:12:29.358054  387181 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/client.key
	I0916 17:12:29.358080  387181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/client.crt with IP's: []
	I0916 17:12:29.523250  387181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/client.crt ...
	I0916 17:12:29.523287  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/client.crt: {Name:mk156820774bbb3e7c9fd156b7c2d3d786058a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:29.523468  387181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/client.key ...
	I0916 17:12:29.523483  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/client.key: {Name:mka7fcd2460c60d8dc92fcbb9e22bb2270ad14fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:29.523576  387181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.key.3de5026c
	I0916 17:12:29.523596  387181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.crt.3de5026c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.164.0.3]
	I0916 17:12:29.882337  387181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.crt.3de5026c ...
	I0916 17:12:29.882374  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.crt.3de5026c: {Name:mk49179263f081622b1edc30530a439849796873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:29.882539  387181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.key.3de5026c ...
	I0916 17:12:29.882556  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.key.3de5026c: {Name:mk46a60fd46f0dd9cb1e7ea5b30f4c1a5265e5af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:29.882639  387181 certs.go:381] copying /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.crt.3de5026c -> /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.crt
	I0916 17:12:29.882763  387181 certs.go:385] copying /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.key.3de5026c -> /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.key
	I0916 17:12:29.882857  387181 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.key
	I0916 17:12:29.882883  387181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0916 17:12:30.117130  387181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.crt ...
	I0916 17:12:30.117168  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0e9a7c31b0d97f8cc078c2310d7f92f8a198b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:30.117337  387181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.key ...
	I0916 17:12:30.117352  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.key: {Name:mk3138a38ddef9abf5a7dafdcaaacbd56fab7852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:30.117613  387181 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-376650/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 17:12:30.117665  387181 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-376650/.minikube/certs/ca.pem (1082 bytes)
	I0916 17:12:30.117704  387181 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-376650/.minikube/certs/cert.pem (1123 bytes)
	I0916 17:12:30.117747  387181 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-376650/.minikube/certs/key.pem (1679 bytes)
	I0916 17:12:30.118385  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 17:12:30.118525  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1072983174 /var/lib/minikube/certs/ca.crt
	I0916 17:12:30.128241  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 17:12:30.128382  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube811942237 /var/lib/minikube/certs/ca.key
	I0916 17:12:30.137310  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 17:12:30.137495  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1004799573 /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 17:12:30.145826  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 17:12:30.145963  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2234755043 /var/lib/minikube/certs/proxy-client-ca.key
	I0916 17:12:30.155443  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0916 17:12:30.155591  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube170753872 /var/lib/minikube/certs/apiserver.crt
	I0916 17:12:30.164938  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 17:12:30.165149  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1431772589 /var/lib/minikube/certs/apiserver.key
	I0916 17:12:30.174476  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 17:12:30.174643  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3965886181 /var/lib/minikube/certs/proxy-client.crt
	I0916 17:12:30.183560  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 17:12:30.183683  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2410991611 /var/lib/minikube/certs/proxy-client.key
	I0916 17:12:30.191558  387181 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0916 17:12:30.191593  387181 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:12:30.191632  387181 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:12:30.198939  387181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-376650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 17:12:30.199103  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube795856199 /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:12:30.207238  387181 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 17:12:30.207363  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube438891213 /var/lib/minikube/kubeconfig
	I0916 17:12:30.214899  387181 exec_runner.go:51] Run: openssl version
	I0916 17:12:30.217728  387181 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 17:12:30.226084  387181 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:12:30.227456  387181 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 16 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:12:30.227508  387181 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:12:30.230230  387181 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 17:12:30.237625  387181 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 17:12:30.238728  387181 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 17:12:30.238778  387181 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.164.0.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:12:30.238885  387181 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 17:12:30.254334  387181 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 17:12:30.263540  387181 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 17:12:30.272529  387181 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0916 17:12:30.293763  387181 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 17:12:30.302673  387181 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 17:12:30.302698  387181 kubeadm.go:157] found existing configuration files:
	
	I0916 17:12:30.302748  387181 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 17:12:30.310401  387181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 17:12:30.310449  387181 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 17:12:30.317815  387181 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 17:12:30.325503  387181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 17:12:30.325551  387181 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 17:12:30.334068  387181 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 17:12:30.341932  387181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 17:12:30.341984  387181 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 17:12:30.349073  387181 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 17:12:30.356552  387181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 17:12:30.356600  387181 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 17:12:30.364136  387181 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 17:12:30.396640  387181 kubeadm.go:310] W0916 17:12:30.396529  388068 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:12:30.397181  387181 kubeadm.go:310] W0916 17:12:30.397144  388068 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:12:30.399265  387181 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 17:12:30.399699  387181 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 17:12:30.486814  387181 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 17:12:30.486984  387181 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 17:12:30.487004  387181 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 17:12:30.487011  387181 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 17:12:30.497198  387181 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 17:12:30.500188  387181 out.go:235]   - Generating certificates and keys ...
	I0916 17:12:30.500229  387181 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 17:12:30.500239  387181 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 17:12:30.662920  387181 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 17:12:30.738725  387181 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 17:12:30.921964  387181 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 17:12:31.099320  387181 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 17:12:31.423315  387181 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 17:12:31.423344  387181 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-11] and IPs [10.164.0.3 127.0.0.1 ::1]
	I0916 17:12:31.703234  387181 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 17:12:31.703359  387181 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-11] and IPs [10.164.0.3 127.0.0.1 ::1]
	I0916 17:12:31.772510  387181 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 17:12:32.000330  387181 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 17:12:32.155636  387181 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 17:12:32.155827  387181 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 17:12:32.446789  387181 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 17:12:32.665185  387181 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 17:12:32.939867  387181 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 17:12:33.045321  387181 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 17:12:33.212377  387181 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 17:12:33.212911  387181 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 17:12:33.215186  387181 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 17:12:33.218394  387181 out.go:235]   - Booting up control plane ...
	I0916 17:12:33.218424  387181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 17:12:33.218446  387181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 17:12:33.218453  387181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 17:12:33.234606  387181 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 17:12:33.238903  387181 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 17:12:33.238923  387181 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 17:12:33.466573  387181 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 17:12:33.466598  387181 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 17:12:33.968467  387181 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.849147ms
	I0916 17:12:33.968500  387181 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 17:12:37.969899  387181 kubeadm.go:310] [api-check] The API server is healthy after 4.001420284s
	I0916 17:12:37.981246  387181 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 17:12:37.992191  387181 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 17:12:38.012025  387181 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 17:12:38.012163  387181 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-11 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 17:12:38.020122  387181 kubeadm.go:310] [bootstrap-token] Using token: karooy.f8eiav3wuvys0m49
	I0916 17:12:38.021417  387181 out.go:235]   - Configuring RBAC rules ...
	I0916 17:12:38.021450  387181 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 17:12:38.024770  387181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 17:12:38.030106  387181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 17:12:38.032636  387181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 17:12:38.034967  387181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 17:12:38.038201  387181 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 17:12:38.375721  387181 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 17:12:38.800382  387181 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 17:12:39.375977  387181 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 17:12:39.377145  387181 kubeadm.go:310] 
	I0916 17:12:39.377163  387181 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 17:12:39.377169  387181 kubeadm.go:310] 
	I0916 17:12:39.377173  387181 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 17:12:39.377177  387181 kubeadm.go:310] 
	I0916 17:12:39.377181  387181 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 17:12:39.377186  387181 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 17:12:39.377190  387181 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 17:12:39.377201  387181 kubeadm.go:310] 
	I0916 17:12:39.377206  387181 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 17:12:39.377217  387181 kubeadm.go:310] 
	I0916 17:12:39.377222  387181 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 17:12:39.377226  387181 kubeadm.go:310] 
	I0916 17:12:39.377230  387181 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 17:12:39.377234  387181 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 17:12:39.377238  387181 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 17:12:39.377242  387181 kubeadm.go:310] 
	I0916 17:12:39.377246  387181 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 17:12:39.377251  387181 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 17:12:39.377254  387181 kubeadm.go:310] 
	I0916 17:12:39.377258  387181 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token karooy.f8eiav3wuvys0m49 \
	I0916 17:12:39.377263  387181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:510ec39f3aad2dab83e5c78ed3745d69ae17e7517fefbab9edc1f42fe22d2d74 \
	I0916 17:12:39.377269  387181 kubeadm.go:310] 	--control-plane 
	I0916 17:12:39.377273  387181 kubeadm.go:310] 
	I0916 17:12:39.377277  387181 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 17:12:39.377280  387181 kubeadm.go:310] 
	I0916 17:12:39.377283  387181 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token karooy.f8eiav3wuvys0m49 \
	I0916 17:12:39.377286  387181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:510ec39f3aad2dab83e5c78ed3745d69ae17e7517fefbab9edc1f42fe22d2d74 
	I0916 17:12:39.380470  387181 cni.go:84] Creating CNI manager for ""
	I0916 17:12:39.380499  387181 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:12:39.382224  387181 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 17:12:39.383619  387181 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0916 17:12:39.394430  387181 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 17:12:39.394573  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1902701000 /etc/cni/net.d/1-k8s.conflist
	I0916 17:12:39.404484  387181 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 17:12:39.404582  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:39.404588  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-11 minikube.k8s.io/updated_at=2024_09_16T17_12_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0916 17:12:39.413593  387181 ops.go:34] apiserver oom_adj: -16
	I0916 17:12:39.475578  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:39.976153  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:40.475808  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:40.975788  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:41.476160  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:41.975914  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:42.475794  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:42.975844  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:43.476368  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:43.975620  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:44.476164  387181 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:12:44.545674  387181 kubeadm.go:1113] duration metric: took 5.141169687s to wait for elevateKubeSystemPrivileges
	I0916 17:12:44.545719  387181 kubeadm.go:394] duration metric: took 14.306956497s to StartCluster
	I0916 17:12:44.545796  387181 settings.go:142] acquiring lock: {Name:mk440b97ca940980815dccc61fcc0525e80969ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:44.545876  387181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-376650/kubeconfig
	I0916 17:12:44.546561  387181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/kubeconfig: {Name:mk992013e6613a596ea1c6e2dc3798ecdc195f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:12:44.546799  387181 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 17:12:44.546787  387181 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 17:12:44.546966  387181 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0916 17:12:44.546966  387181 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0916 17:12:44.546994  387181 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0916 17:12:44.547008  387181 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0916 17:12:44.547017  387181 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0916 17:12:44.547020  387181 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0916 17:12:44.547030  387181 addons.go:69] Setting volcano=true in profile "minikube"
	I0916 17:12:44.547041  387181 addons.go:234] Setting addon volcano=true in "minikube"
	I0916 17:12:44.547046  387181 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0916 17:12:44.547055  387181 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0916 17:12:44.547057  387181 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0916 17:12:44.547051  387181 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0916 17:12:44.547065  387181 mustload.go:65] Loading cluster: minikube
	I0916 17:12:44.547066  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547069  387181 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0916 17:12:44.547070  387181 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0916 17:12:44.547084  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547095  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547101  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547101  387181 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:12:44.547004  387181 addons.go:69] Setting yakd=true in profile "minikube"
	I0916 17:12:44.547253  387181 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:12:44.547048  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547255  387181 addons.go:234] Setting addon yakd=true in "minikube"
	I0916 17:12:44.547414  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547041  387181 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0916 17:12:44.547677  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547002  387181 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0916 17:12:44.547722  387181 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0916 17:12:44.547748  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.547865  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.547880  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.547878  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.547895  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.547900  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.547914  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.547918  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.547927  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.547945  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.548090  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.548104  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.548134  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.548136  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.548151  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.548184  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.548311  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.548325  387181 out.go:177] * Configuring local host environment ...
	I0916 17:12:44.548329  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.547019  387181 addons.go:69] Setting registry=true in profile "minikube"
	I0916 17:12:44.548704  387181 addons.go:234] Setting addon registry=true in "minikube"
	I0916 17:12:44.548708  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.548722  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.548742  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.548752  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.548766  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.548779  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.548807  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.548363  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.549219  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.549238  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.549268  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 17:12:44.550525  387181 out.go:270] * 
	W0916 17:12:44.550544  387181 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0916 17:12:44.550552  387181 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0916 17:12:44.550558  387181 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0916 17:12:44.550572  387181 out.go:270] * 
	W0916 17:12:44.550629  387181 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0916 17:12:44.550637  387181 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0916 17:12:44.550644  387181 out.go:270] * 
	W0916 17:12:44.550670  387181 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0916 17:12:44.550678  387181 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0916 17:12:44.550684  387181 out.go:270] * 
	W0916 17:12:44.550690  387181 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0916 17:12:44.550718  387181 start.go:235] Will wait 6m0s for node &{Name: IP:10.164.0.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 17:12:44.551134  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.551153  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.551183  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.546994  387181 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0916 17:12:44.551811  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.546984  387181 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0916 17:12:44.552040  387181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0916 17:12:44.547039  387181 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0916 17:12:44.547009  387181 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0916 17:12:44.552215  387181 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0916 17:12:44.552242  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.552758  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.552777  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.552813  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.552985  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.553002  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.553034  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.553084  387181 out.go:177] * Verifying Kubernetes components...
	I0916 17:12:44.557794  387181 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0916 17:12:44.571528  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.573295  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.573325  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.573367  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.573746  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.573863  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.573402  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.573971  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.574008  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.586721  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.590853  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.596411  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.605688  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.608830  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.613506  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.614371  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.614435  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.616161  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.616202  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.620382  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.620446  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.623107  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.629966  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.630038  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.631547  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.634419  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.634484  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.634818  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.634849  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.636202  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.638155  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.639118  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.639792  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.642129  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.642151  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.642611  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 17:12:44.644078  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 17:12:44.645208  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 17:12:44.647040  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.647070  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.647676  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.647922  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.648164  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.648572  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 17:12:44.649975  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 17:12:44.651140  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 17:12:44.653185  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 17:12:44.653282  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.653345  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.653376  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.653403  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.653550  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 17:12:44.654550  387181 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 17:12:44.654566  387181 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 17:12:44.654791  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.656142  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.656308  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.656352  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.656468  387181 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 17:12:44.656569  387181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 17:12:44.656593  387181 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 17:12:44.656724  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2584470291 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 17:12:44.657351  387181 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 17:12:44.657385  387181 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 17:12:44.657844  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.657879  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2581535404 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 17:12:44.657893  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.658168  387181 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 17:12:44.658169  387181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 17:12:44.658187  387181 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 17:12:44.658197  387181 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 17:12:44.658302  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube339522796 /etc/kubernetes/addons/yakd-ns.yaml
	I0916 17:12:44.658331  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1522455149 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 17:12:44.664793  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.665025  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.665948  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.665972  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.666758  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.666819  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.674527  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.674606  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.676066  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.676395  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.676422  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.679567  387181 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 17:12:44.680900  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.681201  387181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 17:12:44.681235  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 17:12:44.681246  387181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 17:12:44.681270  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 17:12:44.681384  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1958813175 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 17:12:44.681393  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1046466543 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 17:12:44.684022  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.684049  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.684928  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.684951  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.686724  387181 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 17:12:44.688096  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.688117  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.688134  387181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 17:12:44.688158  387181 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 17:12:44.688294  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2054122008 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 17:12:44.691874  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.692168  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.693367  387181 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 17:12:44.693582  387181 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0916 17:12:44.693620  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.694286  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.694302  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.694342  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.694600  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.694625  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.694902  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.695076  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.695395  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.695558  387181 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 17:12:44.695703  387181 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:12:44.695958  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 17:12:44.696125  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2323299116 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:12:44.696308  387181 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 17:12:44.697782  387181 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 17:12:44.697836  387181 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 17:12:44.697859  387181 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 17:12:44.697997  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3165885539 /etc/kubernetes/addons/ig-namespace.yaml
	I0916 17:12:44.699062  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.700297  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.700311  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.700554  387181 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 17:12:44.700582  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 17:12:44.700593  387181 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 17:12:44.701179  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3569767812 /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 17:12:44.702818  387181 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 17:12:44.702855  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 17:12:44.703081  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3083463317 /etc/kubernetes/addons/deployment.yaml
	I0916 17:12:44.704028  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.704199  387181 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 17:12:44.704218  387181 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 17:12:44.704336  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1586349650 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 17:12:44.704526  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.705761  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.705806  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.705944  387181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 17:12:44.705993  387181 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 17:12:44.707294  387181 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:12:44.707318  387181 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0916 17:12:44.707325  387181 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:12:44.707361  387181 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:12:44.708742  387181 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 17:12:44.709901  387181 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 17:12:44.709924  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 17:12:44.710038  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1854035903 /etc/kubernetes/addons/registry-rc.yaml
	I0916 17:12:44.714050  387181 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 17:12:44.714070  387181 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 17:12:44.714187  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube56806178 /etc/kubernetes/addons/yakd-sa.yaml
	I0916 17:12:44.718894  387181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 17:12:44.718927  387181 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 17:12:44.719076  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4133093361 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 17:12:44.723511  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 17:12:44.723765  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.723788  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.727317  387181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 17:12:44.728136  387181 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 17:12:44.728420  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3907254135 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 17:12:44.729647  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.730700  387181 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0916 17:12:44.730744  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.731541  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:44.731558  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:44.731588  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:44.731711  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:12:44.740447  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 17:12:44.742717  387181 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 17:12:44.742748  387181 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 17:12:44.742882  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2533144538 /etc/kubernetes/addons/registry-svc.yaml
	I0916 17:12:44.743046  387181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 17:12:44.743069  387181 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 17:12:44.743170  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3171204960 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 17:12:44.743828  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.743904  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.759357  387181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 17:12:44.769786  387181 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 17:12:44.770129  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3973461906 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 17:12:44.770334  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 17:12:44.770464  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1507043618 /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:12:44.770851  387181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 17:12:44.770875  387181 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 17:12:44.771071  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2711615753 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 17:12:44.771135  387181 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 17:12:44.771158  387181 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 17:12:44.771280  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube829320609 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 17:12:44.772245  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.776289  387181 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 17:12:44.776339  387181 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 17:12:44.776507  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3828469207 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 17:12:44.776962  387181 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:12:44.776982  387181 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 17:12:44.777073  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3770312797 /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:12:44.777510  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.777539  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.781600  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:44.781998  387181 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:12:44.782034  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 17:12:44.782160  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3282421394 /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:12:44.782988  387181 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 17:12:44.783016  387181 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 17:12:44.783156  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2914848902 /etc/kubernetes/addons/yakd-crb.yaml
	I0916 17:12:44.784853  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.784881  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:44.789200  387181 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:12:44.789222  387181 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 17:12:44.789403  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1523518950 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:12:44.791400  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:12:44.796470  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:12:44.798724  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.798788  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.801735  387181 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 17:12:44.807464  387181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 17:12:44.807496  387181 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 17:12:44.807623  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3562293486 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 17:12:44.810472  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:12:44.828324  387181 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 17:12:44.828362  387181 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 17:12:44.828599  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube679042247 /etc/kubernetes/addons/yakd-svc.yaml
	I0916 17:12:44.829193  387181 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 17:12:44.829226  387181 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 17:12:44.829302  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:12:44.829348  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube621591172 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 17:12:44.843088  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:44.843222  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:44.856636  387181 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 17:12:44.856673  387181 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 17:12:44.856852  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1510412258 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 17:12:44.872971  387181 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:12:44.873010  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 17:12:44.873162  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3847989064 /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:12:44.875787  387181 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 17:12:44.875818  387181 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 17:12:44.875951  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2389281895 /etc/kubernetes/addons/ig-role.yaml
	I0916 17:12:44.878337  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.878377  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.885144  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.885199  387181 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 17:12:44.885220  387181 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0916 17:12:44.885229  387181 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0916 17:12:44.885277  387181 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0916 17:12:44.887779  387181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 17:12:44.887823  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 17:12:44.887957  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2613906176 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 17:12:44.899274  387181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 17:12:44.899309  387181 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 17:12:44.899444  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3804465456 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 17:12:44.921915  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:44.921983  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:44.928423  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:44.930609  387181 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 17:12:44.932187  387181 out.go:177]   - Using image docker.io/busybox:stable
	I0916 17:12:44.933357  387181 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:12:44.933397  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 17:12:44.933558  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1677248607 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:12:44.960527  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:12:44.960888  387181 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 17:12:44.960921  387181 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 17:12:44.961053  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1639489618 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 17:12:44.965732  387181 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:12:44.965768  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 17:12:44.965918  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3306658956 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:12:44.981657  387181 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 17:12:44.981694  387181 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 17:12:44.981823  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3544363481 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 17:12:44.996892  387181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 17:12:44.996934  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 17:12:44.997351  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube942872623 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 17:12:44.997857  387181 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 17:12:44.998006  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2100522188 /etc/kubernetes/addons/storageclass.yaml
	I0916 17:12:45.001918  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:12:45.017218  387181 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 17:12:45.017272  387181 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 17:12:45.017421  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4292123711 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 17:12:45.020167  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:12:45.050009  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 17:12:45.061635  387181 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 17:12:45.061675  387181 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 17:12:45.061846  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4058862643 /etc/kubernetes/addons/ig-crd.yaml
	I0916 17:12:45.082270  387181 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0916 17:12:45.099513  387181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 17:12:45.099553  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 17:12:45.099708  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393881643 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 17:12:45.133945  387181 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:12:45.133990  387181 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 17:12:45.134158  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1757001572 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:12:45.140715  387181 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:12:45.140751  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 17:12:45.140907  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3119904194 /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:12:45.172542  387181 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-11" to be "Ready" ...
	I0916 17:12:45.178736  387181 node_ready.go:49] node "ubuntu-20-agent-11" has status "Ready":"True"
	I0916 17:12:45.178772  387181 node_ready.go:38] duration metric: took 6.19609ms for node "ubuntu-20-agent-11" to be "Ready" ...
	I0916 17:12:45.178786  387181 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 17:12:45.192740  387181 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6lq5l" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:45.209797  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:12:45.228449  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:12:45.625898  387181 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0916 17:12:46.061463  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.232113213s)
	I0916 17:12:46.061505  387181 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0916 17:12:46.099195  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.302657869s)
	I0916 17:12:46.112203  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.32074973s)
	I0916 17:12:46.112255  387181 addons.go:475] Verifying addon registry=true in "minikube"
	I0916 17:12:46.115434  387181 out.go:177] * Verifying registry addon...
	I0916 17:12:46.119395  387181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 17:12:46.125722  387181 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 17:12:46.125748  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:46.131239  387181 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0916 17:12:46.238262  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.277664975s)
	I0916 17:12:46.239594  387181 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0916 17:12:46.521447  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.292938934s)
	I0916 17:12:46.675937  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:46.800724  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.78049897s)
	W0916 17:12:46.800769  387181 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 17:12:46.800797  387181 retry.go:31] will retry after 323.709634ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 17:12:46.809191  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.80722795s)
	I0916 17:12:47.124974  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:47.125074  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:12:47.199077  387181 pod_ready.go:103] pod "coredns-7c65d6cfc9-6lq5l" in "kube-system" namespace has status "Ready":"False"
	I0916 17:12:47.628963  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:47.831709  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.621837842s)
	I0916 17:12:47.831754  387181 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0916 17:12:47.834735  387181 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 17:12:47.838607  387181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 17:12:47.846235  387181 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 17:12:47.846261  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:47.873215  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.132709529s)
	I0916 17:12:48.123596  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:48.344509  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:48.622803  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:48.843816  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:49.123828  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:49.343604  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:49.624361  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:49.699247  387181 pod_ready.go:103] pod "coredns-7c65d6cfc9-6lq5l" in "kube-system" namespace has status "Ready":"False"
	I0916 17:12:49.843675  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:50.047734  387181 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.922620938s)
	I0916 17:12:50.123888  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:50.343990  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:50.623747  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:50.844669  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:51.124563  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:51.343461  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:51.624772  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:51.799893  387181 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 17:12:51.800071  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2719304585 /var/lib/minikube/google_application_credentials.json
	I0916 17:12:51.843483  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:51.847720  387181 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 17:12:51.847925  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4154576172 /var/lib/minikube/google_cloud_project
	I0916 17:12:51.864585  387181 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0916 17:12:51.864687  387181 host.go:66] Checking if "minikube" exists ...
	I0916 17:12:51.865252  387181 kubeconfig.go:125] found "minikube" server: "https://10.164.0.3:8443"
	I0916 17:12:51.865272  387181 api_server.go:166] Checking apiserver status ...
	I0916 17:12:51.865317  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:51.883541  387181 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/388506/cgroup
	I0916 17:12:51.892997  387181 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0"
	I0916 17:12:51.893056  387181 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd4b6dbb66b387b463544a6ee76f2a570/e84006747215dccadbcfb972daf1eb51a1e857cc4456bc0b13ebe00e107094d0/freezer.state
	I0916 17:12:51.901666  387181 api_server.go:204] freezer state: "THAWED"
	I0916 17:12:51.901693  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:51.905357  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:51.905407  387181 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 17:12:51.972402  387181 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:12:52.042179  387181 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 17:12:52.104832  387181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 17:12:52.104907  387181 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 17:12:52.105820  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2677401735 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 17:12:52.117003  387181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 17:12:52.117041  387181 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 17:12:52.117184  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube628466629 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 17:12:52.123815  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:52.146477  387181 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:12:52.146519  387181 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 17:12:52.146681  387181 exec_runner.go:51] Run: sudo cp -a /tmp/minikube558029850 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:12:52.199536  387181 pod_ready.go:103] pod "coredns-7c65d6cfc9-6lq5l" in "kube-system" namespace has status "Ready":"False"
	I0916 17:12:52.220897  387181 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:12:52.343599  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:52.633268  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:52.698859  387181 pod_ready.go:98] pod "coredns-7c65d6cfc9-6lq5l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.164.0.3 HostIPs:[{IP:10.164.0.3}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 17:12:44 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 17:12:45 +0000 UTC,FinishedAt:2024-09-16 17:12:51 +0000 UTC,ContainerID:docker://5f597b6f33a62399e14f10c51c9f2159011a6f77657645fde63b97f20bc7b6d5,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://5f597b6f33a62399e14f10c51c9f2159011a6f77657645fde63b97f20bc7b6d5 Started:0xc002758100 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002361530} {Name:kube-api-access-4rs8h MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002361540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 17:12:52.698891  387181 pod_ready.go:82] duration metric: took 7.506043426s for pod "coredns-7c65d6cfc9-6lq5l" in "kube-system" namespace to be "Ready" ...
	E0916 17:12:52.698905  387181 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-6lq5l" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:52 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:12:44 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.164.0.3 HostIPs:[{IP:10.164.0.3}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-16 17:12:44 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 17:12:45 +0000 UTC,FinishedAt:2024-09-16 17:12:51 +0000 UTC,ContainerID:docker://5f597b6f33a62399e14f10c51c9f2159011a6f77657645fde63b97f20bc7b6d5,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://5f597b6f33a62399e14f10c51c9f2159011a6f77657645fde63b97f20bc7b6d5 Started:0xc002758100 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002361530} {Name:kube-api-access-4rs8h MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002361540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 17:12:52.698917  387181 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gncvl" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.704116  387181 pod_ready.go:93] pod "coredns-7c65d6cfc9-gncvl" in "kube-system" namespace has status "Ready":"True"
	I0916 17:12:52.704145  387181 pod_ready.go:82] duration metric: took 5.217534ms for pod "coredns-7c65d6cfc9-gncvl" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.704157  387181 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.709882  387181 pod_ready.go:93] pod "etcd-ubuntu-20-agent-11" in "kube-system" namespace has status "Ready":"True"
	I0916 17:12:52.709907  387181 pod_ready.go:82] duration metric: took 5.741151ms for pod "etcd-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.709919  387181 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.715564  387181 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-11" in "kube-system" namespace has status "Ready":"True"
	I0916 17:12:52.715594  387181 pod_ready.go:82] duration metric: took 5.666276ms for pod "kube-apiserver-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.715607  387181 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.723621  387181 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-11" in "kube-system" namespace has status "Ready":"True"
	I0916 17:12:52.723647  387181 pod_ready.go:82] duration metric: took 8.031883ms for pod "kube-controller-manager-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.723660  387181 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-68zv9" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:52.731399  387181 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0916 17:12:52.733014  387181 out.go:177] * Verifying gcp-auth addon...
	I0916 17:12:52.735465  387181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 17:12:52.737813  387181 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 17:12:52.843734  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:53.096602  387181 pod_ready.go:93] pod "kube-proxy-68zv9" in "kube-system" namespace has status "Ready":"True"
	I0916 17:12:53.096628  387181 pod_ready.go:82] duration metric: took 372.95998ms for pod "kube-proxy-68zv9" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:53.096640  387181 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:53.123425  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:53.343110  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:53.497065  387181 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-11" in "kube-system" namespace has status "Ready":"True"
	I0916 17:12:53.497087  387181 pod_ready.go:82] duration metric: took 400.440308ms for pod "kube-scheduler-ubuntu-20-agent-11" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:53.497097  387181 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4nk8v" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:53.623800  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:53.843543  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:53.896708  387181 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4nk8v" in "kube-system" namespace has status "Ready":"True"
	I0916 17:12:53.896737  387181 pod_ready.go:82] duration metric: took 399.633185ms for pod "nvidia-device-plugin-daemonset-4nk8v" in "kube-system" namespace to be "Ready" ...
	I0916 17:12:53.896749  387181 pod_ready.go:39] duration metric: took 8.717948414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 17:12:53.896768  387181 api_server.go:52] waiting for apiserver process to appear ...
	I0916 17:12:53.896821  387181 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:12:53.915401  387181 api_server.go:72] duration metric: took 9.364642888s to wait for apiserver process to appear ...
	I0916 17:12:53.915428  387181 api_server.go:88] waiting for apiserver healthz status ...
	I0916 17:12:53.915450  387181 api_server.go:253] Checking apiserver healthz at https://10.164.0.3:8443/healthz ...
	I0916 17:12:53.921030  387181 api_server.go:279] https://10.164.0.3:8443/healthz returned 200:
	ok
	I0916 17:12:53.922120  387181 api_server.go:141] control plane version: v1.31.1
	I0916 17:12:53.922144  387181 api_server.go:131] duration metric: took 6.708887ms to wait for apiserver health ...
	I0916 17:12:53.922153  387181 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 17:12:54.101528  387181 system_pods.go:59] 17 kube-system pods found
	I0916 17:12:54.101559  387181 system_pods.go:61] "coredns-7c65d6cfc9-gncvl" [b9ecc72a-3aae-4cff-a36f-cef0fa108f72] Running
	I0916 17:12:54.101568  387181 system_pods.go:61] "csi-hostpath-attacher-0" [2a9b1824-8c81-49ee-930c-c1764d860a2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 17:12:54.101574  387181 system_pods.go:61] "csi-hostpath-resizer-0" [e29d3c3c-d141-48e7-b74b-bcd33228555a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 17:12:54.101583  387181 system_pods.go:61] "csi-hostpathplugin-8rwcc" [b828953e-b131-42cf-b76e-69369caf2d70] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 17:12:54.101590  387181 system_pods.go:61] "etcd-ubuntu-20-agent-11" [1b0a8d1a-2365-487c-b260-ac47dd49de7b] Running
	I0916 17:12:54.101596  387181 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-11" [1be6d684-4508-4e20-be5d-e0bae6f8d5d1] Running
	I0916 17:12:54.101602  387181 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-11" [367fa8d9-a21c-4bda-96a8-ff13ac35d1ff] Running
	I0916 17:12:54.101607  387181 system_pods.go:61] "kube-proxy-68zv9" [4f67a938-4ff9-4719-918c-a0405b5dc6ee] Running
	I0916 17:12:54.101611  387181 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-11" [00fd0941-19c0-4acc-80f2-8f82cca300c6] Running
	I0916 17:12:54.101618  387181 system_pods.go:61] "metrics-server-84c5f94fbc-5978t" [d701b5fa-ed17-4ee7-a966-09b2697eb07c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 17:12:54.101627  387181 system_pods.go:61] "nvidia-device-plugin-daemonset-4nk8v" [7670ea2f-0151-40da-99a4-799802b5cbf8] Running
	I0916 17:12:54.101638  387181 system_pods.go:61] "registry-66c9cd494c-cg447" [5f1f5d00-6103-4a68-904b-02c56427ec65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 17:12:54.101643  387181 system_pods.go:61] "registry-proxy-q8dsq" [0679d7d4-7310-4135-aee2-0654bc4e3e00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 17:12:54.101655  387181 system_pods.go:61] "snapshot-controller-56fcc65765-hmhfc" [afd3dd29-5f96-4506-baca-71e1e9545095] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:12:54.101664  387181 system_pods.go:61] "snapshot-controller-56fcc65765-r9klc" [81dbdd6e-5b6a-4d44-8c3c-94dfe5a7cc06] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:12:54.101668  387181 system_pods.go:61] "storage-provisioner" [37557198-8c0b-42ad-82c6-cb2855281c84] Running
	I0916 17:12:54.101673  387181 system_pods.go:61] "tiller-deploy-b48cc5f79-zkjcg" [0caef804-3403-4812-a38d-a32b22873751] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 17:12:54.101681  387181 system_pods.go:74] duration metric: took 179.521682ms to wait for pod list to return data ...
	I0916 17:12:54.101689  387181 default_sa.go:34] waiting for default service account to be created ...
	I0916 17:12:54.123211  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:54.296716  387181 default_sa.go:45] found service account: "default"
	I0916 17:12:54.296756  387181 default_sa.go:55] duration metric: took 195.05623ms for default service account to be created ...
	I0916 17:12:54.296768  387181 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 17:12:54.343702  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:54.502200  387181 system_pods.go:86] 17 kube-system pods found
	I0916 17:12:54.502234  387181 system_pods.go:89] "coredns-7c65d6cfc9-gncvl" [b9ecc72a-3aae-4cff-a36f-cef0fa108f72] Running
	I0916 17:12:54.502247  387181 system_pods.go:89] "csi-hostpath-attacher-0" [2a9b1824-8c81-49ee-930c-c1764d860a2e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 17:12:54.502255  387181 system_pods.go:89] "csi-hostpath-resizer-0" [e29d3c3c-d141-48e7-b74b-bcd33228555a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 17:12:54.502268  387181 system_pods.go:89] "csi-hostpathplugin-8rwcc" [b828953e-b131-42cf-b76e-69369caf2d70] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 17:12:54.502276  387181 system_pods.go:89] "etcd-ubuntu-20-agent-11" [1b0a8d1a-2365-487c-b260-ac47dd49de7b] Running
	I0916 17:12:54.502287  387181 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-11" [1be6d684-4508-4e20-be5d-e0bae6f8d5d1] Running
	I0916 17:12:54.502296  387181 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-11" [367fa8d9-a21c-4bda-96a8-ff13ac35d1ff] Running
	I0916 17:12:54.502305  387181 system_pods.go:89] "kube-proxy-68zv9" [4f67a938-4ff9-4719-918c-a0405b5dc6ee] Running
	I0916 17:12:54.502312  387181 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-11" [00fd0941-19c0-4acc-80f2-8f82cca300c6] Running
	I0916 17:12:54.502321  387181 system_pods.go:89] "metrics-server-84c5f94fbc-5978t" [d701b5fa-ed17-4ee7-a966-09b2697eb07c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 17:12:54.502330  387181 system_pods.go:89] "nvidia-device-plugin-daemonset-4nk8v" [7670ea2f-0151-40da-99a4-799802b5cbf8] Running
	I0916 17:12:54.502340  387181 system_pods.go:89] "registry-66c9cd494c-cg447" [5f1f5d00-6103-4a68-904b-02c56427ec65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 17:12:54.502351  387181 system_pods.go:89] "registry-proxy-q8dsq" [0679d7d4-7310-4135-aee2-0654bc4e3e00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 17:12:54.502361  387181 system_pods.go:89] "snapshot-controller-56fcc65765-hmhfc" [afd3dd29-5f96-4506-baca-71e1e9545095] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:12:54.502373  387181 system_pods.go:89] "snapshot-controller-56fcc65765-r9klc" [81dbdd6e-5b6a-4d44-8c3c-94dfe5a7cc06] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:12:54.502379  387181 system_pods.go:89] "storage-provisioner" [37557198-8c0b-42ad-82c6-cb2855281c84] Running
	I0916 17:12:54.502389  387181 system_pods.go:89] "tiller-deploy-b48cc5f79-zkjcg" [0caef804-3403-4812-a38d-a32b22873751] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 17:12:54.502401  387181 system_pods.go:126] duration metric: took 205.62551ms to wait for k8s-apps to be running ...
	I0916 17:12:54.502413  387181 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 17:12:54.502476  387181 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:12:54.517125  387181 system_svc.go:56] duration metric: took 14.698185ms WaitForService to wait for kubelet
	I0916 17:12:54.517156  387181 kubeadm.go:582] duration metric: took 9.96640746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 17:12:54.517182  387181 node_conditions.go:102] verifying NodePressure condition ...
	I0916 17:12:54.623299  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:54.697367  387181 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 17:12:54.697402  387181 node_conditions.go:123] node cpu capacity is 8
	I0916 17:12:54.697417  387181 node_conditions.go:105] duration metric: took 180.228581ms to run NodePressure ...
	I0916 17:12:54.697432  387181 start.go:241] waiting for startup goroutines ...
	I0916 17:12:54.844005  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:55.123582  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:55.343234  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:55.622775  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:55.842730  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:56.123436  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:56.343049  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:56.623181  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:56.843291  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:57.123224  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:57.343813  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:57.623347  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:57.875150  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:58.123825  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:58.343599  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:58.623126  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:58.843757  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:59.125343  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:59.343681  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:59.623183  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:12:59.843584  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:00.123203  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:00.343712  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:00.622458  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:00.844205  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:01.123442  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:01.343682  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:01.623101  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:01.843517  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:02.123371  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:02.344211  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:02.623094  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:02.842772  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:03.123508  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:03.342649  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:03.623640  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:03.843511  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:04.123297  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:04.343316  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:04.623377  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:04.843538  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:05.123260  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:05.343667  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:05.623132  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:05.843254  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:06.124375  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:06.342734  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:06.623296  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:06.843175  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:07.123272  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:07.343660  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:07.623022  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:07.844262  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:08.123924  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:08.342508  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:08.622998  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:08.842626  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:09.123100  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:09.344145  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:09.623977  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:09.843648  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:10.123676  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:10.343132  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:10.623826  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:10.843755  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:11.122725  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:11.342377  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:11.622810  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:11.842886  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:12.123089  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:12.343765  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:12.623234  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:12.843840  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:13.122624  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:13.370021  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:13.623078  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:13.844524  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:14.124517  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:13:14.343919  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:14.622702  387181 kapi.go:107] duration metric: took 28.503308988s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 17:13:14.842817  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:15.343064  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:15.842731  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:16.343811  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:16.842320  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:17.342992  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:17.842649  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:18.343669  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:18.843250  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:19.343119  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:19.844038  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:20.343404  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:20.843412  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:21.343751  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:21.843629  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:22.342888  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:22.842576  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:23.342538  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:23.844645  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:24.343867  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:24.844000  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:25.342297  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:25.843588  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:26.344412  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:26.844495  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:27.343992  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:27.843342  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:28.343751  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:28.843332  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:29.343651  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:29.843987  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:30.344634  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:30.843318  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:31.343413  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:13:31.842700  387181 kapi.go:107] duration metric: took 44.004081441s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 17:14:14.738560  387181 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 17:14:14.738588  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:15.239461  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:15.739525  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:16.238638  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:16.739146  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:17.239258  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:17.739404  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:18.239851  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:18.739433  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:19.239402  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:19.738747  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:20.239396  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:20.738904  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:21.238627  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:21.738642  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:22.238621  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:22.739807  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:23.239020  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:23.739407  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:24.238737  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:24.739512  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:25.238491  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:25.739036  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:26.239372  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:26.739053  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:27.239242  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:27.739426  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:28.238676  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:28.738959  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:29.238876  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:29.739217  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:30.239042  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:30.739230  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:31.239840  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:31.739541  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:32.238699  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:32.738730  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:33.238790  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:33.738758  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:34.239242  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:34.739891  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:35.238940  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:35.739437  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:36.238866  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:36.738932  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:37.239289  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:37.739472  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:38.239072  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:38.739413  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:39.239393  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:39.738762  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:40.238979  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:40.739144  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:41.239216  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:41.739614  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:42.239269  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:42.739076  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:43.239068  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:43.739335  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:44.239843  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:44.739377  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:45.239540  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:45.739232  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:46.239394  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:46.739005  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:47.239402  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:47.739882  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:48.239164  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:48.739089  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:49.239661  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:49.739292  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:50.239046  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:50.739196  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:51.239475  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:51.738924  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:52.239112  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:52.739280  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:53.239050  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:53.739398  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:54.238706  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:54.738937  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:55.239494  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:55.738734  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:56.240055  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:56.739552  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:57.240161  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:57.739425  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:58.238571  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:58.738754  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:59.239134  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:14:59.739528  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:00.239100  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:00.739344  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:01.238499  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:01.739222  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:02.239702  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:02.739474  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:03.238685  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:03.739173  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:04.239591  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:04.739225  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:05.240009  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:05.739726  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:06.238476  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:06.738618  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:07.238596  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:07.739310  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:08.239328  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:08.739256  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:09.240402  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:09.739480  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:10.238989  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:10.738928  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:11.239303  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:11.739541  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:12.239207  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:12.739045  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:13.239406  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:13.739636  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:14.238732  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:14.738892  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:15.239233  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:15.739629  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:16.238790  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:16.738842  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:17.239042  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:17.739263  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:18.239116  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:18.739209  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:19.239376  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:19.739259  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:20.239740  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:20.738699  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:21.238741  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:21.739510  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:22.238517  387181 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:15:22.738880  387181 kapi.go:107] duration metric: took 2m30.003413074s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 17:15:22.740623  387181 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0916 17:15:22.741980  387181 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 17:15:22.743344  387181 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 17:15:22.744652  387181 out.go:177] * Enabled addons: nvidia-device-plugin, helm-tiller, default-storageclass, cloud-spanner, metrics-server, storage-provisioner, yakd, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0916 17:15:22.745989  387181 addons.go:510] duration metric: took 2m38.199189238s for enable addons: enabled=[nvidia-device-plugin helm-tiller default-storageclass cloud-spanner metrics-server storage-provisioner yakd inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0916 17:15:22.746028  387181 start.go:246] waiting for cluster config update ...
	I0916 17:15:22.746047  387181 start.go:255] writing updated cluster config ...
	I0916 17:15:22.746306  387181 exec_runner.go:51] Run: rm -f paused
	I0916 17:15:22.792569  387181 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0916 17:15:22.794523  387181 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-07-05 11:17:12 UTC, end at Mon 2024-09-16 17:25:15 UTC. --
	Sep 16 17:18:48 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:18:48.931400128Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 16 17:19:25 ubuntu-20-agent-11 cri-dockerd[387725]: time="2024-09-16T17:19:25Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 17:19:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:19:27.238630834Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 17:19:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:19:27.238624423Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 17:19:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:19:27.240554744Z" level=error msg="Error running exec 8b032328ea8abe6e7b9680eebc8c32a225eeb63a4387fdec3ffbecb932e9f2aa in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 17:19:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:19:27.438378587Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 17:19:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:19:27.438391493Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 16 17:19:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:19:27.438567754Z" level=error msg="Error running exec 325164b67f415ddc4301d8f9e0ed8e7a86c2de61aba684f980d9566448556a99 in container: cannot exec in a stopped state: unknown"
	Sep 16 17:19:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:19:27.445765536Z" level=info msg="ignoring event" container=9dc0d6c8913d2fd4c92b81f8bfea4e55b4b3a8b30baa499c668b79bc9ef19da3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:21:36 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:21:36.931448639Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 16 17:21:36 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:21:36.933576555Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 16 17:24:15 ubuntu-20-agent-11 cri-dockerd[387725]: time="2024-09-16T17:24:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6ec9ad61ae59cf0a4a6682875ade0ec685e697b0893c8964e87791de8be85605/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 16 17:24:16 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:24:16.319421752Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 17:24:16 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:24:16.321572377Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 17:24:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:24:27.935713101Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 17:24:27 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:24:27.937929986Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 17:24:29 ubuntu-20-agent-11 cri-dockerd[387725]: time="2024-09-16T17:24:29Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 16 17:24:31 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:24:31.410941466Z" level=info msg="ignoring event" container=8e04e4ecb3a90d66c959a600965ed4a074cdc587a311feff3d3c8860527ed201 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:24:55 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:24:55.944188707Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 17:24:55 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:24:55.946350128Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 16 17:25:14 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:25:14.772052045Z" level=info msg="ignoring event" container=6ec9ad61ae59cf0a4a6682875ade0ec685e697b0893c8964e87791de8be85605 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:25:15 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:25:15.041457374Z" level=info msg="ignoring event" container=1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:25:15 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:25:15.108090300Z" level=info msg="ignoring event" container=b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:25:15 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:25:15.176775740Z" level=info msg="ignoring event" container=87e969c212d32328273fee12b3471c1c71f65d110fb97d178b96d04567d70cb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:25:15 ubuntu-20-agent-11 dockerd[387397]: time="2024-09-16T17:25:15.259939810Z" level=info msg="ignoring event" container=622ffc5cc5c8561e6d01e8c4398468265e2c8e928262ed7f08f626313922a485 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8e04e4ecb3a90       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            46 seconds ago      Exited              gadget                                   7                   b6ebce45bc07e       gadget-fgwqw
	cbb48cc34a718       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   3939dabe190d7       gcp-auth-89d5ffd79-667gs
	c6efbffd9e76a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   b37b29ded59dd       csi-hostpathplugin-8rwcc
	f3813da08ca6d       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   b37b29ded59dd       csi-hostpathplugin-8rwcc
	46b0ca5a33d15       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   b37b29ded59dd       csi-hostpathplugin-8rwcc
	506770bb39f9e       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   b37b29ded59dd       csi-hostpathplugin-8rwcc
	80a4f3b68ba8c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   b37b29ded59dd       csi-hostpathplugin-8rwcc
	6bba863d0d58a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   63875572ef35e       csi-hostpath-resizer-0
	71307b53c2495       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   b37b29ded59dd       csi-hostpathplugin-8rwcc
	ac645750e76d1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   ee012486cc1ad       csi-hostpath-attacher-0
	0bebbe3c337ec       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   6ab673c782bb9       local-path-provisioner-86d989889c-txq8r
	270697098c5c3       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   4637fa1788a6f       snapshot-controller-56fcc65765-hmhfc
	9a9734592241d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   95754a316afd1       snapshot-controller-56fcc65765-r9klc
	e0c967703a7e7       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        12 minutes ago      Running             yakd                                     0                   49d0ef52d254f       yakd-dashboard-67d98fc6b-frph4
	060c0aea88f4b       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   2575ebec76d24       metrics-server-84c5f94fbc-5978t
	a1df9509562ec       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               12 minutes ago      Running             cloud-spanner-emulator                   0                   c7497e5da20a1       cloud-spanner-emulator-769b77f747-8b9pp
	0430627a7ce2e       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  12 minutes ago      Running             tiller                                   0                   e7806e1e38c9f       tiller-deploy-b48cc5f79-zkjcg
	bdb29b0bf3305       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     12 minutes ago      Running             nvidia-device-plugin-ctr                 0                   6d451126233f3       nvidia-device-plugin-daemonset-4nk8v
	5f001831caccd       6e38f40d628db                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   a974ab16bc24e       storage-provisioner
	924e7b4d6c7fd       c69fa2e9cbf5f                                                                                                                                12 minutes ago      Running             coredns                                  0                   eb2d28d3466c3       coredns-7c65d6cfc9-gncvl
	40542313af4f7       60c005f310ff3                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   fe9b24c701240       kube-proxy-68zv9
	d01bffc67e6ab       2e96e5913fc06                                                                                                                                12 minutes ago      Running             etcd                                     0                   e0ab03395a009       etcd-ubuntu-20-agent-11
	e84006747215d       6bab7719df100                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   892e9030e64b2       kube-apiserver-ubuntu-20-agent-11
	ac9b606f84fb8       175ffd71cce3d                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   0eca0cf437eb8       kube-controller-manager-ubuntu-20-agent-11
	18ae6142e24f7       9aa1fad941575                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   9027cfa99fb28       kube-scheduler-ubuntu-20-agent-11
	
	
	==> coredns [924e7b4d6c7f] <==
	[INFO] 10.244.0.9:49367 - 26909 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117136s
	[INFO] 10.244.0.9:50350 - 32869 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105113s
	[INFO] 10.244.0.9:50350 - 29306 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141562s
	[INFO] 10.244.0.9:42018 - 5897 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000083553s
	[INFO] 10.244.0.9:42018 - 28676 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000098257s
	[INFO] 10.244.0.9:46859 - 2330 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000072498s
	[INFO] 10.244.0.9:46859 - 39430 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000087955s
	[INFO] 10.244.0.9:50752 - 33390 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000093037s
	[INFO] 10.244.0.9:50752 - 7009 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000131434s
	[INFO] 10.244.0.9:33693 - 12038 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086407s
	[INFO] 10.244.0.9:33693 - 19972 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142465s
	[INFO] 10.244.0.24:58792 - 33004 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00029975s
	[INFO] 10.244.0.24:39189 - 46737 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000334489s
	[INFO] 10.244.0.24:57723 - 7838 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131369s
	[INFO] 10.244.0.24:35507 - 6113 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000203268s
	[INFO] 10.244.0.24:44677 - 50508 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110856s
	[INFO] 10.244.0.24:36430 - 12946 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139715s
	[INFO] 10.244.0.24:44665 - 18835 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00330506s
	[INFO] 10.244.0.24:48561 - 47661 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.0043654s
	[INFO] 10.244.0.24:56587 - 38108 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003154342s
	[INFO] 10.244.0.24:56219 - 34520 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003230032s
	[INFO] 10.244.0.24:52969 - 30862 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00365696s
	[INFO] 10.244.0.24:58110 - 41813 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005116214s
	[INFO] 10.244.0.24:44403 - 63405 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002295602s
	[INFO] 10.244.0.24:34540 - 24186 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002898446s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-11
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-11
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T17_12_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-11
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-11"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 17:12:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-11
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 17:25:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 17:21:19 +0000   Mon, 16 Sep 2024 17:12:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 17:21:19 +0000   Mon, 16 Sep 2024 17:12:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 17:21:19 +0000   Mon, 16 Sep 2024 17:12:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 17:21:19 +0000   Mon, 16 Sep 2024 17:12:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.164.0.3
	  Hostname:    ubuntu-20-agent-11
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                abf08e96-3064-a5cf-751f-8bc1589a446e
	  Boot ID:                    b00746c9-fc8c-4cc0-a296-bc2b3421b8e2
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-8b9pp       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-fgwqw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-667gs                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-gncvl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-8rwcc                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ubuntu-20-agent-11                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-ubuntu-20-agent-11             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-11    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-68zv9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ubuntu-20-agent-11             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-5978t               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-4nk8v          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-hmhfc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-r9klc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 tiller-deploy-b48cc5f79-zkjcg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-txq8r       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-frph4                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node ubuntu-20-agent-11 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node ubuntu-20-agent-11 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node ubuntu-20-agent-11 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node ubuntu-20-agent-11 event: Registered Node ubuntu-20-agent-11 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 d0 c1 fe d1 d4 08 06
	[  +0.033321] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c4 88 62 c2 ec 08 06
	[  +1.930702] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 23 c6 ad c6 03 08 06
	[  +3.044183] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 26 10 4e 22 06 08 06
	[  +2.028075] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a a4 99 14 e5 e2 08 06
	[  +2.599424] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 03 45 c3 b9 87 08 06
	[  +4.685300] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 2b 3f da 01 e3 08 06
	[  +0.076898] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 6f 13 c4 9b 6f 08 06
	[  +0.086616] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 14 e7 d1 f5 60 08 06
	[Sep16 17:14] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 8c ec 4f 12 0c 08 06
	[  +0.035221] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 00 63 f2 0b 4d 08 06
	[Sep16 17:15] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 15 b7 e1 2a 73 08 06
	[  +0.000491] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 f0 ee 6c fd b2 08 06
	
	
	==> etcd [d01bffc67e6a] <==
	{"level":"info","ts":"2024-09-16T17:12:35.129111Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T17:12:35.316313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5af7a70031193698 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T17:12:35.316368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5af7a70031193698 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T17:12:35.316427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5af7a70031193698 received MsgPreVoteResp from 5af7a70031193698 at term 1"}
	{"level":"info","ts":"2024-09-16T17:12:35.316446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5af7a70031193698 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T17:12:35.316459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5af7a70031193698 received MsgVoteResp from 5af7a70031193698 at term 2"}
	{"level":"info","ts":"2024-09-16T17:12:35.316470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5af7a70031193698 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T17:12:35.316482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5af7a70031193698 elected leader 5af7a70031193698 at term 2"}
	{"level":"info","ts":"2024-09-16T17:12:35.317409Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:12:35.318010Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5af7a70031193698","local-member-attributes":"{Name:ubuntu-20-agent-11 ClientURLs:[https://10.164.0.3:2379]}","request-path":"/0/members/5af7a70031193698/attributes","cluster-id":"93609bc2f3efe9a0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T17:12:35.318048Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:12:35.318041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:12:35.318299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T17:12:35.318326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T17:12:35.318428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"93609bc2f3efe9a0","local-member-id":"5af7a70031193698","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:12:35.318509Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:12:35.318537Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:12:35.319250Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:12:35.319305Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:12:35.320174Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.164.0.3:2379"}
	{"level":"info","ts":"2024-09-16T17:12:35.320344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T17:12:57.990461Z","caller":"traceutil/trace.go:171","msg":"trace[2080646295] transaction","detail":"{read_only:false; response_revision:908; number_of_response:1; }","duration":"116.42941ms","start":"2024-09-16T17:12:57.874005Z","end":"2024-09-16T17:12:57.990434Z","steps":["trace[2080646295] 'process raft request'  (duration: 115.983038ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:22:35.470053Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1728}
	{"level":"info","ts":"2024-09-16T17:22:35.494555Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1728,"took":"23.908888ms","hash":1448796355,"current-db-size-bytes":8552448,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4460544,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-09-16T17:22:35.494596Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1448796355,"revision":1728,"compact-revision":-1}
	
	
	==> gcp-auth [cbb48cc34a71] <==
	2024/09/16 17:15:22 GCP Auth Webhook started!
	2024/09/16 17:15:38 Ready to marshal response ...
	2024/09/16 17:15:38 Ready to write response ...
	2024/09/16 17:15:38 Ready to marshal response ...
	2024/09/16 17:15:38 Ready to write response ...
	2024/09/16 17:16:02 Ready to marshal response ...
	2024/09/16 17:16:02 Ready to write response ...
	2024/09/16 17:16:02 Ready to marshal response ...
	2024/09/16 17:16:02 Ready to write response ...
	2024/09/16 17:16:02 Ready to marshal response ...
	2024/09/16 17:16:02 Ready to write response ...
	2024/09/16 17:24:14 Ready to marshal response ...
	2024/09/16 17:24:14 Ready to write response ...
	
	
	==> kernel <==
	 17:25:15 up  1:07,  0 users,  load average: 0.13, 0.36, 1.15
	Linux ubuntu-20-agent-11 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [e84006747215] <==
	W0916 17:13:57.979933       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.89.220:443: connect: connection refused
	W0916 17:14:14.696120       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.137.28:443: connect: connection refused
	E0916 17:14:14.696193       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.137.28:443: connect: connection refused" logger="UnhandledError"
	W0916 17:14:55.766114       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.137.28:443: connect: connection refused
	E0916 17:14:55.766155       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.137.28:443: connect: connection refused" logger="UnhandledError"
	W0916 17:14:55.789472       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.137.28:443: connect: connection refused
	E0916 17:14:55.789527       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.137.28:443: connect: connection refused" logger="UnhandledError"
	I0916 17:15:38.073949       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0916 17:15:38.090504       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0916 17:15:52.476355       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0916 17:15:52.499406       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0916 17:15:52.599254       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0916 17:15:52.628438       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0916 17:15:52.628483       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0916 17:15:52.721288       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0916 17:15:52.794183       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0916 17:15:52.829061       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0916 17:15:52.869739       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0916 17:15:53.537033       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0916 17:15:53.722333       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0916 17:15:53.762983       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0916 17:15:53.768863       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0916 17:15:53.825853       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0916 17:15:53.870292       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0916 17:15:54.054902       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [ac9b606f84fb] <==
	W0916 17:23:58.859401       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:58.859455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:59.666157       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:59.666198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:24:09.107054       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:24:09.107111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:24:19.747923       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:24:19.747970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:24:28.525239       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:24:28.525293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:24:31.007593       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:24:31.007638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:24:36.390131       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:24:36.390177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:24:40.422196       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:24:40.422246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:24:57.296566       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:24:57.296621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:25:07.384897       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:25:07.384947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:25:09.095218       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:25:09.095273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:25:10.335197       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:25:10.335241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:25:15.006242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.355µs"
	
	
	==> kube-proxy [40542313af4f] <==
	I0916 17:12:44.472270       1 server_linux.go:66] "Using iptables proxy"
	I0916 17:12:44.606183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.164.0.3"]
	E0916 17:12:44.606273       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 17:12:44.706880       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 17:12:44.706958       1 server_linux.go:169] "Using iptables Proxier"
	I0916 17:12:44.726828       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 17:12:44.727249       1 server.go:483] "Version info" version="v1.31.1"
	I0916 17:12:44.727279       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:12:44.739495       1 config.go:328] "Starting node config controller"
	I0916 17:12:44.739524       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 17:12:44.739913       1 config.go:199] "Starting service config controller"
	I0916 17:12:44.739923       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 17:12:44.739943       1 config.go:105] "Starting endpoint slice config controller"
	I0916 17:12:44.739948       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 17:12:44.839798       1 shared_informer.go:320] Caches are synced for node config
	I0916 17:12:44.841932       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 17:12:44.842000       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [18ae6142e24f] <==
	W0916 17:12:36.392929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 17:12:36.392982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 17:12:36.392993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 17:12:36.393004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:36.392988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 17:12:36.393048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:36.393063       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 17:12:36.393092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:36.393110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 17:12:36.393133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:36.393137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 17:12:36.393168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:37.342827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 17:12:37.342902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:37.471980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 17:12:37.472033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:37.499663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 17:12:37.499707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:37.517377       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 17:12:37.517417       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 17:12:37.579234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 17:12:37.579288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:12:37.598102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 17:12:37.598151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 17:12:39.890675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-07-05 11:17:12 UTC, end at Mon 2024-09-16 17:25:15 UTC. --
	Sep 16 17:25:10 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:10.803023  388623 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q8dsq" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 17:25:10 ubuntu-20-agent-11 kubelet[388623]: E0916 17:25:10.807217  388623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="0793f4ef-4fe5-48b0-a178-d24a30770697"
	Sep 16 17:25:12 ubuntu-20-agent-11 kubelet[388623]: E0916 17:25:12.805378  388623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a275e48a-6393-4be6-b09b-d947f80c50e8"
	Sep 16 17:25:14 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:14.802729  388623 scope.go:117] "RemoveContainer" containerID="8e04e4ecb3a90d66c959a600965ed4a074cdc587a311feff3d3c8860527ed201"
	Sep 16 17:25:14 ubuntu-20-agent-11 kubelet[388623]: E0916 17:25:14.802938  388623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-fgwqw_gadget(2d003f98-f909-423c-9e14-c2f1c2d75f23)\"" pod="gadget/gadget-fgwqw" podUID="2d003f98-f909-423c-9e14-c2f1c2d75f23"
	Sep 16 17:25:14 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:14.947917  388623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0793f4ef-4fe5-48b0-a178-d24a30770697-gcp-creds\") pod \"0793f4ef-4fe5-48b0-a178-d24a30770697\" (UID: \"0793f4ef-4fe5-48b0-a178-d24a30770697\") "
	Sep 16 17:25:14 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:14.947992  388623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4n8h\" (UniqueName: \"kubernetes.io/projected/0793f4ef-4fe5-48b0-a178-d24a30770697-kube-api-access-x4n8h\") pod \"0793f4ef-4fe5-48b0-a178-d24a30770697\" (UID: \"0793f4ef-4fe5-48b0-a178-d24a30770697\") "
	Sep 16 17:25:14 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:14.948069  388623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0793f4ef-4fe5-48b0-a178-d24a30770697-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0793f4ef-4fe5-48b0-a178-d24a30770697" (UID: "0793f4ef-4fe5-48b0-a178-d24a30770697"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 17:25:14 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:14.948129  388623 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0793f4ef-4fe5-48b0-a178-d24a30770697-gcp-creds\") on node \"ubuntu-20-agent-11\" DevicePath \"\""
	Sep 16 17:25:14 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:14.950103  388623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0793f4ef-4fe5-48b0-a178-d24a30770697-kube-api-access-x4n8h" (OuterVolumeSpecName: "kube-api-access-x4n8h") pod "0793f4ef-4fe5-48b0-a178-d24a30770697" (UID: "0793f4ef-4fe5-48b0-a178-d24a30770697"). InnerVolumeSpecName "kube-api-access-x4n8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.048808  388623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x4n8h\" (UniqueName: \"kubernetes.io/projected/0793f4ef-4fe5-48b0-a178-d24a30770697-kube-api-access-x4n8h\") on node \"ubuntu-20-agent-11\" DevicePath \"\""
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.351156  388623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdczt\" (UniqueName: \"kubernetes.io/projected/5f1f5d00-6103-4a68-904b-02c56427ec65-kube-api-access-xdczt\") pod \"5f1f5d00-6103-4a68-904b-02c56427ec65\" (UID: \"5f1f5d00-6103-4a68-904b-02c56427ec65\") "
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.351236  388623 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52w9m\" (UniqueName: \"kubernetes.io/projected/0679d7d4-7310-4135-aee2-0654bc4e3e00-kube-api-access-52w9m\") pod \"0679d7d4-7310-4135-aee2-0654bc4e3e00\" (UID: \"0679d7d4-7310-4135-aee2-0654bc4e3e00\") "
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.353255  388623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0679d7d4-7310-4135-aee2-0654bc4e3e00-kube-api-access-52w9m" (OuterVolumeSpecName: "kube-api-access-52w9m") pod "0679d7d4-7310-4135-aee2-0654bc4e3e00" (UID: "0679d7d4-7310-4135-aee2-0654bc4e3e00"). InnerVolumeSpecName "kube-api-access-52w9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.353430  388623 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1f5d00-6103-4a68-904b-02c56427ec65-kube-api-access-xdczt" (OuterVolumeSpecName: "kube-api-access-xdczt") pod "5f1f5d00-6103-4a68-904b-02c56427ec65" (UID: "5f1f5d00-6103-4a68-904b-02c56427ec65"). InnerVolumeSpecName "kube-api-access-xdczt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.452401  388623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-52w9m\" (UniqueName: \"kubernetes.io/projected/0679d7d4-7310-4135-aee2-0654bc4e3e00-kube-api-access-52w9m\") on node \"ubuntu-20-agent-11\" DevicePath \"\""
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.452449  388623 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xdczt\" (UniqueName: \"kubernetes.io/projected/5f1f5d00-6103-4a68-904b-02c56427ec65-kube-api-access-xdczt\") on node \"ubuntu-20-agent-11\" DevicePath \"\""
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.473452  388623 scope.go:117] "RemoveContainer" containerID="b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0"
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.492732  388623 scope.go:117] "RemoveContainer" containerID="b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0"
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: E0916 17:25:15.494507  388623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0" containerID="b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0"
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.494559  388623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0"} err="failed to get container status \"b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0\": rpc error: code = Unknown desc = Error response from daemon: No such container: b27a8d882b33e1a1bf228872151d7c518611608507b2eb487725b652dd760ab0"
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.494589  388623 scope.go:117] "RemoveContainer" containerID="1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041"
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.511894  388623 scope.go:117] "RemoveContainer" containerID="1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041"
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: E0916 17:25:15.512817  388623 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041" containerID="1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041"
	Sep 16 17:25:15 ubuntu-20-agent-11 kubelet[388623]: I0916 17:25:15.512858  388623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041"} err="failed to get container status \"1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1404182825d7458d2ed5c89f392a7adaf8dec5cd4e570e4ab63c9e39fb24c041"
	
	
	==> storage-provisioner [5f001831cacc] <==
	I0916 17:12:47.029342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 17:12:47.040046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 17:12:47.040105       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 17:12:47.058056       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 17:12:47.058255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-11_a044bfde-00fd-4ae7-bcda-f74fd398cb67!
	I0916 17:12:47.059509       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3dcf76-3337-4a8a-8d92-6943d8667482", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-11_a044bfde-00fd-4ae7-bcda-f74fd398cb67 became leader
	I0916 17:12:47.160284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-11_a044bfde-00fd-4ae7-bcda-f74fd398cb67!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-11/10.164.0.3
	Start Time:       Mon, 16 Sep 2024 17:16:02 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7dbpk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7dbpk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-11
	  Normal   Pulling    7m50s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m50s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m50s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m35s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.83s)

                                                
                                    

Test pass (105/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 19.19
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 1.22
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
22 TestOffline 41.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 176.7
29 TestAddons/serial/Volcano 39.49
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.48
36 TestAddons/parallel/MetricsServer 5.38
37 TestAddons/parallel/HelmTiller 10.09
39 TestAddons/parallel/CSI 37.96
40 TestAddons/parallel/Headlamp 20.93
41 TestAddons/parallel/CloudSpanner 5.28
43 TestAddons/parallel/NvidiaDevicePlugin 6.23
44 TestAddons/parallel/Yakd 10.41
45 TestAddons/StoppedEnableDisable 10.69
47 TestCertExpiration 227.47
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 30.38
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 27.11
62 TestFunctional/serial/KubeContext 0.05
63 TestFunctional/serial/KubectlGetPods 0.07
65 TestFunctional/serial/MinikubeKubectlCmd 0.12
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 37.22
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.82
70 TestFunctional/serial/LogsFileCmd 0.86
71 TestFunctional/serial/InvalidService 3.93
73 TestFunctional/parallel/ConfigCmd 0.27
74 TestFunctional/parallel/DashboardCmd 6.63
75 TestFunctional/parallel/DryRun 0.18
76 TestFunctional/parallel/InternationalLanguage 0.1
77 TestFunctional/parallel/StatusCmd 0.47
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.24
81 TestFunctional/parallel/ProfileCmd/profile_list 0.23
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.23
84 TestFunctional/parallel/ServiceCmd/DeployApp 9.15
85 TestFunctional/parallel/ServiceCmd/List 0.34
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.16
88 TestFunctional/parallel/ServiceCmd/Format 0.16
89 TestFunctional/parallel/ServiceCmd/URL 0.16
90 TestFunctional/parallel/ServiceCmdConnect 8.31
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 21.57
105 TestFunctional/parallel/MySQL 20.91
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.74
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 12.86
114 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.38
120 TestFunctional/parallel/License 0.63
121 TestFunctional/delete_echo-server_images 0.03
122 TestFunctional/delete_my-image_image 0.02
123 TestFunctional/delete_minikube_cached_images 0.02
128 TestImageBuild/serial/Setup 14.86
129 TestImageBuild/serial/NormalBuild 3.41
130 TestImageBuild/serial/BuildWithBuildArg 0.98
131 TestImageBuild/serial/BuildWithDockerIgnore 0.67
132 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.65
136 TestJSONOutput/start/Command 27.74
137 TestJSONOutput/start/Audit 0
139 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/pause/Command 0.5
143 TestJSONOutput/pause/Audit 0
145 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/unpause/Command 0.44
149 TestJSONOutput/unpause/Audit 0
151 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/stop/Command 10.42
155 TestJSONOutput/stop/Audit 0
157 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
159 TestErrorJSONOutput 0.2
164 TestMainNoArgs 0.05
165 TestMinikubeProfile 34.31
173 TestPause/serial/Start 27.19
174 TestPause/serial/SecondStartNoReconfiguration 28.97
175 TestPause/serial/Pause 0.5
176 TestPause/serial/VerifyStatus 0.13
177 TestPause/serial/Unpause 0.38
178 TestPause/serial/PauseAgain 0.53
179 TestPause/serial/DeletePaused 1.68
180 TestPause/serial/VerifyDeletedResources 0.07
194 TestRunningBinaryUpgrade 72.6
196 TestStoppedBinaryUpgrade/Setup 2.68
197 TestStoppedBinaryUpgrade/Upgrade 51.1
198 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
199 TestKubernetesUpgrade 304.77
TestDownloadOnly/v1.20.0/json-events (19.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (19.186956162s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (60.221102ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:11:22
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:11:22.682171  383540 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:11:22.682301  383540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:11:22.682311  383540 out.go:358] Setting ErrFile to fd 2...
	I0916 17:11:22.682317  383540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:11:22.682512  383540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-376650/.minikube/bin
	W0916 17:11:22.682672  383540 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19649-376650/.minikube/config/config.json: open /home/jenkins/minikube-integration/19649-376650/.minikube/config/config.json: no such file or directory
	I0916 17:11:22.683351  383540 out.go:352] Setting JSON to true
	I0916 17:11:22.684357  383540 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3226,"bootTime":1726503457,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:11:22.684469  383540 start.go:139] virtualization: kvm guest
	I0916 17:11:22.686993  383540 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 17:11:22.687140  383540 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-376650/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:11:22.687220  383540 notify.go:220] Checking for updates...
	I0916 17:11:22.688483  383540 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:11:22.689691  383540 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:11:22.691056  383540 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	I0916 17:11:22.692252  383540 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	I0916 17:11:22.693429  383540 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 17:11:22.695526  383540 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 17:11:22.695749  383540 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:11:22.707319  383540 out.go:97] Using the none driver based on user configuration
	I0916 17:11:22.707368  383540 start.go:297] selected driver: none
	I0916 17:11:22.707376  383540 start.go:901] validating driver "none" against <nil>
	I0916 17:11:22.707402  383540 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0916 17:11:22.707766  383540 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:11:22.708295  383540 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 17:11:22.708434  383540 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 17:11:22.708466  383540 cni.go:84] Creating CNI manager for ""
	I0916 17:11:22.708514  383540 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 17:11:22.708569  383540 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:6000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:11:22.709931  383540 out.go:97] Starting "minikube" primary control-plane node in "minikube" cluster
	I0916 17:11:22.710231  383540 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/config.json ...
	I0916 17:11:22.710261  383540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-376650/.minikube/profiles/minikube/config.json: {Name:mk101ec0f5a26760b7b6ea3f4f1fd448438fbc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:11:22.710410  383540 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 17:11:22.710699  383540 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19649-376650/.minikube/cache/linux/amd64/v1.20.0/kubectl
	I0916 17:11:22.710699  383540 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19649-376650/.minikube/cache/linux/amd64/v1.20.0/kubeadm
	I0916 17:11:22.710745  383540 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19649-376650/.minikube/cache/linux/amd64/v1.20.0/kubelet
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (1.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.218255324s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.22s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (58.679099ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC | 16 Sep 24 17:11 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 16 Sep 24 17:11 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:11:42
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:11:42.183856  383695 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:11:42.184139  383695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:11:42.184149  383695 out.go:358] Setting ErrFile to fd 2...
	I0916 17:11:42.184153  383695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:11:42.184360  383695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-376650/.minikube/bin
	I0916 17:11:42.184933  383695 out.go:352] Setting JSON to true
	I0916 17:11:42.185866  383695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3245,"bootTime":1726503457,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:11:42.185959  383695 start.go:139] virtualization: kvm guest
	I0916 17:11:42.188099  383695 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 17:11:42.188218  383695 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-376650/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:11:42.188316  383695 notify.go:220] Checking for updates...
	I0916 17:11:42.189882  383695 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:11:42.191440  383695 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:11:42.192901  383695 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	I0916 17:11:42.194358  383695 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	I0916 17:11:42.195669  383695 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:35125 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (41.53s)

                                                
                                                
=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.890368788s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.638473359s)
--- PASS: TestOffline (41.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (48.241441ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (50.189708ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (176.7s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (2m56.696482927s)
--- PASS: TestAddons/Setup (176.70s)

TestAddons/serial/Volcano (39.49s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 10.072536ms
addons_test.go:905: volcano-admission stabilized in 10.248944ms
addons_test.go:897: volcano-scheduler stabilized in 10.311179ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-5cb5l" [627c8245-0149-4329-a35e-d8dffb3573e2] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004382136s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-tn8v8" [2087a140-5d74-4264-a1b6-2a248e4687a9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003280906s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-4scft" [fb6a0ce6-0535-4c44-834b-c36c8d9d4b22] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003590936s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a88285fa-63fa-44ac-af5a-4c8c16847783] Pending
helpers_test.go:344: "test-job-nginx-0" [a88285fa-63fa-44ac-af5a-4c8c16847783] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a88285fa-63fa-44ac-af5a-4c8c16847783] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003456812s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.141991382s)
--- PASS: TestAddons/serial/Volcano (39.49s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.48s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fgwqw" [2d003f98-f909-423c-9e14-c2f1c2d75f23] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004792815s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.474966122s)
--- PASS: TestAddons/parallel/InspektorGadget (10.48s)

TestAddons/parallel/MetricsServer (5.38s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.117893ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-5978t" [d701b5fa-ed17-4ee7-a966-09b2697eb07c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004053975s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.38s)

TestAddons/parallel/HelmTiller (10.09s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.932621ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-zkjcg" [0caef804-3403-4812-a38d-a32b22873751] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003662681s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.795653217s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.09s)

TestAddons/parallel/CSI (37.96s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.870645ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4a238262-bff2-4e67-abae-ab0e2e17bf01] Pending
helpers_test.go:344: "task-pv-pod" [4a238262-bff2-4e67-abae-ab0e2e17bf01] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4a238262-bff2-4e67-abae-ab0e2e17bf01] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003832064s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context minikube delete pod task-pv-pod: (1.30483338s)
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [861b802e-cbf5-4099-85db-1a51e7954445] Pending
helpers_test.go:344: "task-pv-pod-restore" [861b802e-cbf5-4099-85db-1a51e7954445] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [861b802e-cbf5-4099-85db-1a51e7954445] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004235889s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.288806823s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.96s)

TestAddons/parallel/Headlamp (20.93s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-9db2n" [3b2fe4d0-e931-427e-839b-b8506c933e41] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-9db2n" [3b2fe4d0-e931-427e-839b-b8506c933e41] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003824843s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.413381562s)
--- PASS: TestAddons/parallel/Headlamp (20.93s)

TestAddons/parallel/CloudSpanner (5.28s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8b9pp" [c741070c-8559-418b-be23-ea5b0819f31f] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003535684s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.28s)

TestAddons/parallel/NvidiaDevicePlugin (6.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4nk8v" [7670ea2f-0151-40da-99a4-799802b5cbf8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003956039s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.23s)

TestAddons/parallel/Yakd (10.41s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-frph4" [4b4b7a0d-0e73-4883-a817-d67fc97b8753] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004169424s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.404597875s)
--- PASS: TestAddons/parallel/Yakd (10.41s)

TestAddons/StoppedEnableDisable (10.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.399554162s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.69s)

TestCertExpiration (227.47s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.396004906s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.330515957s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.739684309s)
--- PASS: TestCertExpiration (227.47s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19649-376650/.minikube/files/etc/test/nested/copy/383528/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.382168454s)
--- PASS: TestFunctional/serial/StartWithProxy (30.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (27.112129228s)
functional_test.go:663: soft start took 27.112844466s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.11s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.218799872s)
functional_test.go:761: restart took 37.218977451s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.22s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.82s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.82s)

TestFunctional/serial/LogsFileCmd (0.86s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2014124174/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.86s)

TestFunctional/serial/InvalidService (3.93s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (165.166955ms)

-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.164.0.3:31058 |
	|-----------|-------------|-------------|-------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.93s)

TestFunctional/parallel/ConfigCmd (0.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (43.061747ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (44.10111ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)

TestFunctional/parallel/DashboardCmd (6.63s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/16 17:32:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 419470: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.63s)

                                                
                                    
TestFunctional/parallel/DryRun (0.18s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (92.915365ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0916 17:32:50.132842  419786 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:32:50.132956  419786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:32:50.132968  419786 out.go:358] Setting ErrFile to fd 2...
	I0916 17:32:50.132974  419786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:32:50.133176  419786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-376650/.minikube/bin
	I0916 17:32:50.133701  419786 out.go:352] Setting JSON to false
	I0916 17:32:50.134860  419786 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4513,"bootTime":1726503457,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:32:50.134998  419786 start.go:139] virtualization: kvm guest
	I0916 17:32:50.137283  419786 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 17:32:50.138900  419786 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-376650/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:32:50.138973  419786 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:32:50.138998  419786 notify.go:220] Checking for updates...
	I0916 17:32:50.141788  419786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:32:50.143180  419786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	I0916 17:32:50.144654  419786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	I0916 17:32:50.146038  419786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:32:50.147373  419786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:32:50.148959  419786 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:32:50.149326  419786 exec_runner.go:51] Run: systemctl --version
	I0916 17:32:50.152511  419786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:32:50.166513  419786 out.go:177] * Using the none driver based on existing profile
	I0916 17:32:50.167805  419786 start.go:297] selected driver: none
	I0916 17:32:50.167824  419786 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.164.0.3 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:32:50.167985  419786 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:32:50.168030  419786 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 17:32:50.168380  419786 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0916 17:32:50.170881  419786 out.go:201] 
	W0916 17:32:50.172154  419786 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 17:32:50.173314  419786 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.18s)
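The failing first run is the point of the test: with `--dry-run`, minikube validates the request without touching the host, and the 250MB memory ask is rejected (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY) before any cluster work starts. In outline, again abbreviating the binary to `minikube`:

```shell
minikube start --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
echo $?     # 23: requested 250MiB is below the 1800MB usable minimum
minikube start --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
echo $?     # 0: the existing profile validates once the undersized request is dropped
```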

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (101.059382ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0916 17:32:50.326186  419818 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:32:50.326340  419818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:32:50.326353  419818 out.go:358] Setting ErrFile to fd 2...
	I0916 17:32:50.326357  419818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:32:50.326721  419818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-376650/.minikube/bin
	I0916 17:32:50.327364  419818 out.go:352] Setting JSON to false
	I0916 17:32:50.328614  419818 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4513,"bootTime":1726503457,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:32:50.328754  419818 start.go:139] virtualization: kvm guest
	I0916 17:32:50.330876  419818 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0916 17:32:50.332416  419818 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-376650/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:32:50.332446  419818 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:32:50.332506  419818 notify.go:220] Checking for updates...
	I0916 17:32:50.335080  419818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:32:50.336371  419818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	I0916 17:32:50.337887  419818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	I0916 17:32:50.339245  419818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:32:50.340614  419818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:32:50.342689  419818 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:32:50.343210  419818 exec_runner.go:51] Run: systemctl --version
	I0916 17:32:50.346117  419818 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:32:50.358635  419818 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0916 17:32:50.359921  419818 start.go:297] selected driver: none
	I0916 17:32:50.359940  419818 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.164.0.3 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:32:50.360084  419818 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:32:50.360112  419818 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0916 17:32:50.360524  419818 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0916 17:32:50.363139  419818 out.go:201] 
	W0916 17:32:50.364672  419818 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 17:32:50.365963  419818 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.47s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "180.543528ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.767457ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "180.895674ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.710637ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-qfkk6" [02089f91-a025-4798-8c75-588a0631d279] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-qfkk6" [02089f91-a025-4798-8c75-588a0631d279] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003738627s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "333.785314ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.164.0.3:30805
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.164.0.3:30805
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.16s)
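The List, JSONOutput, HTTPS, Format, and URL subtests above are different views of the same hello-node NodePort service. Condensed, with `minikube` standing in for the test binary and the endpoints taken from the log above:

```shell
minikube service list -o json                                  # all services, machine-readable
minikube service --namespace=default --https --url hello-node  # https://10.164.0.3:30805
minikube service hello-node --url --format="{{.IP}}"           # node IP only
minikube service hello-node --url                              # http://10.164.0.3:30805
```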

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.31s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mzmjc" [9688358c-b0d3-490f-b52a-bb67b328be84] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mzmjc" [9688358c-b0d3-490f-b52a-bb67b328be84] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004217352s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.164.0.3:31918
functional_test.go:1675: http://10.164.0.3:31918: success! body:

Hostname: hello-node-connect-67bdd5bbb4-mzmjc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.164.0.3:8080/

Request Headers:
	accept-encoding=gzip
	host=10.164.0.3:31918
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.31s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (21.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f066139e-8a25-4965-be4d-43ee5a36c0c1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003648182s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2cc7e33b-b2bc-4167-ad8e-cbefa5326454] Pending
helpers_test.go:344: "sp-pod" [2cc7e33b-b2bc-4167-ad8e-cbefa5326454] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2cc7e33b-b2bc-4167-ad8e-cbefa5326454] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003113854s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d2157f86-fea2-4f45-b38c-23614d4de02c] Pending
helpers_test.go:344: "sp-pod" [d2157f86-fea2-4f45-b38c-23614d4de02c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d2157f86-fea2-4f45-b38c-23614d4de02c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003579794s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.57s)
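The sequence above boils down to a write-survives-pod-restart check against the provisioned claim. In outline, using the same manifests as the test (run from the test's working directory, against the minikube context):

```shell
kubectl apply -f testdata/storage-provisioner/pvc.yaml   # claim "myclaim"
kubectl apply -f testdata/storage-provisioner/pod.yaml   # pod "sp-pod" mounting the claim
kubectl exec sp-pod -- touch /tmp/mount/foo              # write through the volume
kubectl delete -f testdata/storage-provisioner/pod.yaml  # destroy the pod
kubectl apply -f testdata/storage-provisioner/pod.yaml   # recreate it on the same claim
kubectl exec sp-pod -- ls /tmp/mount                     # the earlier write should still be listed
```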

                                                
                                    
TestFunctional/parallel/MySQL (20.91s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-bk8x7" [7507313c-fb73-43a8-b712-b35adf7bb8ad] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-bk8x7" [7507313c-fb73-43a8-b712-b35adf7bb8ad] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003769316s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-bk8x7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-bk8x7 -- mysql -ppassword -e "show databases;": exit status 1 (115.043652ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-bk8x7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-bk8x7 -- mysql -ppassword -e "show databases;": exit status 1 (112.112827ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-bk8x7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.91s)
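The two ERROR 2002 exits above are expected noise: the pod reports Running before mysqld has finished binding /var/run/mysqld/mysqld.sock, so the harness simply retries the probe until it answers. A minimal sketch of the same retry shape, where `probe` is a hypothetical stub standing in for the `kubectl exec ... mysql -e "show databases;"` call; here it fails its first two calls to mimic the two ERROR 2002 responses:

```shell
#!/bin/sh
# Retry loop in the shape the harness uses above. `probe` is a stub that
# fails twice before succeeding, like the mysql probe while mysqld starts.
n=0
probe() {
    n=$((n + 1))
    [ "$n" -ge 3 ]    # succeed from the third call onward
}

attempts=0
until probe; do
    attempts=$((attempts + 1))
done
echo "probe succeeded on attempt $((attempts + 1))"
```

Run with `sh`, this prints `probe succeeded on attempt 3`; in the real test the stub is replaced by the cluster call and each retry is spaced out rather than immediate.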

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.74s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.734954704s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.74s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (12.86s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.861679974s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (12.86s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.38s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestImageBuild/serial/Setup (14.86s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.860169263s)
--- PASS: TestImageBuild/serial/Setup (14.86s)

TestImageBuild/serial/NormalBuild (3.41s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (3.413784069s)
--- PASS: TestImageBuild/serial/NormalBuild (3.41s)

TestImageBuild/serial/BuildWithBuildArg (0.98s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)
TestJSONOutput/start/Command (27.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (27.738673043s)
--- PASS: TestJSONOutput/start/Command (27.74s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (10.42s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.422846322s)
--- PASS: TestJSONOutput/stop/Command (10.42s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.73503ms)

-- stdout --
	{"specversion":"1.0","id":"47b8a276-171c-43a2-af43-4619f8f29716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e6e4569-71c0-4895-b737-4a990b5ea0c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"e312613c-9525-4b60-9ed9-a7ac6ccad299","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0ced4cf6-3856-401b-aa53-bb26afdc4a6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig"}}
	{"specversion":"1.0","id":"dc19f2f5-1953-4186-a7cb-e3f8346eae2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube"}}
	{"specversion":"1.0","id":"dd82f922-9a7a-4b18-b98d-628949d964fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a6b23a42-a5a8-484f-b88b-a112dddc07d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bd665997-764f-4b81-9f88-4a36e19ed2a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.20s)
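The `--output=json` runs above emit one CloudEvents-style JSON object per line. As a minimal consumer sketch (assumptions: the field layout is taken only from the events shown in this log, using the first step event verbatim):

```python
import json

# One event line as emitted by `minikube start --output=json`,
# copied verbatim from the TestErrorJSONOutput stdout above.
line = '{"specversion":"1.0","id":"47b8a276-171c-43a2-af43-4619f8f29716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}'

event = json.loads(line)
if event["type"] == "io.k8s.sigs.minikube.step":
    data = event["data"]
    # Render a progress line from the step event's data payload.
    print(f'step {data["currentstep"]}/{data["totalsteps"]}: {data["message"]}')
```

In the same way, `io.k8s.sigs.minikube.error` events (like the DRV_UNSUPPORTED_OS one above) carry `exitcode`, `name`, and `message` in `data`.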

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (34.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.977729335s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.501979453s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.196967118s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.31s)
TestPause/serial/Start (27.19s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (27.189159632s)
--- PASS: TestPause/serial/Start (27.19s)

TestPause/serial/SecondStartNoReconfiguration (28.97s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (28.974415832s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.97s)

TestPause/serial/Pause (0.5s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.50s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (133.204884ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
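The `--layout=cluster` status payload above can be inspected programmatically. A minimal sketch, using the exact JSON from this run (the code's assumptions about the schema come only from this one payload; note the command exits non-zero, status 2 here, when the cluster is not fully running):

```python
import json

# Status JSON copied verbatim from the TestPause/serial/VerifyStatus stdout above.
raw = '{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'

status = json.loads(raw)
# In this run, code 418 pairs with StatusName "Paused" at the cluster level.
assert status["StatusCode"] == 418 and status["StatusName"] == "Paused"
for node in status["Nodes"]:
    kubelet = node["Components"]["kubelet"]["StatusName"]
    print(f'{node["Name"]}: node {node["StatusName"]}, kubelet {kubelet}')
```

This matches what the test asserts indirectly: a paused cluster reports a healthy node whose apiserver is Paused and whose kubelet is Stopped.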

                                                
                                    
TestPause/serial/Unpause (0.38s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.38s)

TestPause/serial/PauseAgain (0.53s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

TestPause/serial/DeletePaused (1.68s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.682529636s)
--- PASS: TestPause/serial/DeletePaused (1.68s)

TestPause/serial/VerifyDeletedResources (0.07s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)
TestRunningBinaryUpgrade (72.6s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.449031037 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.449031037 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (31.702805392s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.114707919s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.106257532s)
--- PASS: TestRunningBinaryUpgrade (72.60s)

TestStoppedBinaryUpgrade/Setup (2.68s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.68s)

TestStoppedBinaryUpgrade/Upgrade (51.1s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1733026236 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1733026236 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.912694699s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1733026236 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1733026236 -p minikube stop: (23.749252401s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.43280602s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.10s)
TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

TestKubernetesUpgrade (304.77s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.847062172s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.303024231s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (76.704877ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m15.986087946s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (67.143419ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-376650/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-376650/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.160286927s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.274741671s)
--- PASS: TestKubernetesUpgrade (304.77s)

                                                
                                    

Test skip (61/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
98 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
101 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
103 TestFunctional/parallel/SSHCmd 0
104 TestFunctional/parallel/CpCmd 0
106 TestFunctional/parallel/FileSync 0
107 TestFunctional/parallel/CertSync 0
112 TestFunctional/parallel/DockerEnv 0
113 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/ImageCommands 0
116 TestFunctional/parallel/NonActiveRuntimeDisabled 0
124 TestGvisorAddon 0
125 TestMultiControlPlane 0
133 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
160 TestKicCustomNetwork 0
161 TestKicExistingNetwork 0
162 TestKicCustomSubnet 0
163 TestKicStaticIP 0
166 TestMountStart 0
167 TestMultiNode 0
168 TestNetworkPlugins 0
169 TestNoKubernetes 0
170 TestChangeNoneUser 0
181 TestPreload 0
182 TestScheduledStopWindows 0
183 TestScheduledStopUnix 0
184 TestSkaffold 0
187 TestStartStop/group/old-k8s-version 0.13
188 TestStartStop/group/newest-cni 0.13
189 TestStartStop/group/default-k8s-diff-port 0.13
190 TestStartStop/group/no-preload 0.13
191 TestStartStop/group/disable-driver-mounts 0.14
192 TestStartStop/group/embed-certs 0.13
193 TestInsufficientStorage 0
200 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.14s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestStartStop/group/embed-certs (0.13s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)