Test Report: none_Linux 19689

                    
af422e057ba227eec8656c67d09f56de251f325e : 2024-09-23 : 36336

Failed tests (1/166)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 72.01        |

TestAddons/parallel/Registry (72.01s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.885793ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8hvdw" [678aa223-edb6-4a6c-b3e5-5d95e0ea40f6] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003824315s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4nzb4" [35894a53-f7e8-4743-9eea-200f3986fcd6] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003292519s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.084822959s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
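The assertion at addons_test.go:349 reduces to checking that the header dump from `wget --spider -S` contains an "HTTP/1.1 200" status line; here it never got that far because the wget pod timed out. A minimal sketch of that pass/fail check, with the in-cluster command it would apply to shown as a comment (`check_registry_status` is a hypothetical helper for illustration, not part of the test suite):

```shell
#!/bin/sh
# check_registry_status: succeeds if the captured header dump contains
# the "HTTP/1.1 200" status line the test expects.
check_registry_status() {
  printf '%s\n' "$1" | grep -q "HTTP/1.1 200"
}

# In a running minikube cluster with the registry addon enabled, the
# headers would come from the same command the test runs:
#   kubectl --context minikube run --rm registry-test --restart=Never \
#     --image=gcr.io/k8s-minikube/busybox -it -- \
#     sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
check_registry_status "HTTP/1.1 200 OK" && echo "registry reachable"
```

When the service is unreachable, wget produces no status line at all, so the check fails on the empty capture just as it does on a non-200 response.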
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/23 10:32:54 [DEBUG] GET http://10.150.0.16:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:44303               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:21 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:23 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:23 UTC | 23 Sep 24 10:23 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:20
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:20.820039   14503 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:20.820260   14503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:20.820273   14503 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:20.820279   14503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:20.820494   14503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3689/.minikube/bin
	I0923 10:21:20.821111   14503 out.go:352] Setting JSON to false
	I0923 10:21:20.821946   14503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":228,"bootTime":1727086653,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:20.822041   14503 start.go:139] virtualization: kvm guest
	I0923 10:21:20.824455   14503 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:21:20.826064   14503 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3689/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:21:20.826081   14503 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:20.826099   14503 notify.go:220] Checking for updates...
	I0923 10:21:20.828775   14503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:20.830102   14503 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	I0923 10:21:20.831449   14503 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	I0923 10:21:20.832760   14503 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:21:20.834126   14503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:20.835492   14503 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:20.847200   14503 out.go:177] * Using the none driver based on user configuration
	I0923 10:21:20.848385   14503 start.go:297] selected driver: none
	I0923 10:21:20.848410   14503 start.go:901] validating driver "none" against <nil>
	I0923 10:21:20.848424   14503 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:20.848473   14503 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 10:21:20.848761   14503 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0923 10:21:20.849338   14503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:20.849554   14503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:20.849577   14503 cni.go:84] Creating CNI manager for ""
	I0923 10:21:20.849623   14503 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:20.849632   14503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:20.849676   14503 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:20.851103   14503 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 10:21:20.852575   14503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json ...
	I0923 10:21:20.852610   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json: {Name:mk91c6775a53b295bfcd832a0223bb0435d503a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:20.852737   14503 start.go:360] acquireMachinesLock for minikube: {Name:mk967f578fd3b876cb945ce54e006da4ee685f93 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 10:21:20.852767   14503 start.go:364] duration metric: took 16.983µs to acquireMachinesLock for "minikube"
	I0923 10:21:20.852785   14503 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:21:20.852838   14503 start.go:125] createHost starting for "" (driver="none")
	I0923 10:21:20.854284   14503 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0923 10:21:20.855416   14503 exec_runner.go:51] Run: systemctl --version
	I0923 10:21:20.858054   14503 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0923 10:21:20.858086   14503 client.go:168] LocalClient.Create starting
	I0923 10:21:20.858169   14503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3689/.minikube/certs/ca.pem
	I0923 10:21:20.858201   14503 main.go:141] libmachine: Decoding PEM data...
	I0923 10:21:20.858217   14503 main.go:141] libmachine: Parsing certificate...
	I0923 10:21:20.858272   14503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3689/.minikube/certs/cert.pem
	I0923 10:21:20.858291   14503 main.go:141] libmachine: Decoding PEM data...
	I0923 10:21:20.858304   14503 main.go:141] libmachine: Parsing certificate...
	I0923 10:21:20.858586   14503 client.go:171] duration metric: took 493.569µs to LocalClient.Create
	I0923 10:21:20.858608   14503 start.go:167] duration metric: took 556.143µs to libmachine.API.Create "minikube"
	I0923 10:21:20.858613   14503 start.go:293] postStartSetup for "minikube" (driver="none")
	I0923 10:21:20.858654   14503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:21:20.858698   14503 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:21:20.866594   14503 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:21:20.866613   14503 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:21:20.866622   14503 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:21:20.868535   14503 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0923 10:21:20.869869   14503 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3689/.minikube/addons for local assets ...
	I0923 10:21:20.869932   14503 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3689/.minikube/files for local assets ...
	I0923 10:21:20.869953   14503 start.go:296] duration metric: took 11.335604ms for postStartSetup
	I0923 10:21:20.870612   14503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json ...
	I0923 10:21:20.870745   14503 start.go:128] duration metric: took 17.890139ms to createHost
	I0923 10:21:20.870758   14503 start.go:83] releasing machines lock for "minikube", held for 17.976336ms
	I0923 10:21:20.871120   14503 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:21:20.871243   14503 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0923 10:21:20.873040   14503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 10:21:20.873098   14503 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:21:20.883725   14503 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 10:21:20.883754   14503 start.go:495] detecting cgroup driver to use...
	I0923 10:21:20.883783   14503 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:21:20.883898   14503 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:21:20.904572   14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:21:20.914147   14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:21:20.925632   14503 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:21:20.925689   14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:21:20.936554   14503 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:21:20.948083   14503 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:21:20.958960   14503 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:21:20.969337   14503 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:21:20.977937   14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:21:20.986784   14503 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:21:20.996118   14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:21:21.005327   14503 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:21:21.013802   14503 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:21:21.022053   14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:21:21.259417   14503 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0923 10:21:21.323133   14503 start.go:495] detecting cgroup driver to use...
	I0923 10:21:21.323178   14503 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:21:21.323321   14503 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:21:21.342873   14503 exec_runner.go:51] Run: which cri-dockerd
	I0923 10:21:21.343758   14503 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:21:21.353300   14503 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0923 10:21:21.353328   14503 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:21:21.353362   14503 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:21:21.361144   14503 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:21:21.361325   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1441809314 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 10:21:21.370443   14503 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0923 10:21:21.594413   14503 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0923 10:21:21.822571   14503 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:21:21.822734   14503 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0923 10:21:21.822748   14503 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0923 10:21:21.822786   14503 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0923 10:21:21.831865   14503 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:21:21.831999   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2141491122 /etc/docker/daemon.json
	I0923 10:21:21.841304   14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:21:22.077085   14503 exec_runner.go:51] Run: sudo systemctl restart docker
	I0923 10:21:22.374121   14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:21:22.385617   14503 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0923 10:21:22.402176   14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:21:22.415426   14503 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:21:22.661879   14503 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0923 10:21:22.883350   14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:21:23.113204   14503 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0923 10:21:23.126919   14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:21:23.137701   14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:21:23.364075   14503 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0923 10:21:23.435451   14503 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:21:23.435548   14503 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0923 10:21:23.437235   14503 start.go:563] Will wait 60s for crictl version
	I0923 10:21:23.437279   14503 exec_runner.go:51] Run: which crictl
	I0923 10:21:23.438148   14503 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0923 10:21:23.469977   14503 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 10:21:23.470044   14503 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:21:23.491325   14503 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:21:23.514973   14503 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 10:21:23.515052   14503 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0923 10:21:23.518190   14503 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0923 10:21:23.519499   14503 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.150.0.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:21:23.519608   14503 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:23.519619   14503 kubeadm.go:934] updating node { 10.150.0.16 8443 v1.31.1 docker true true} ...
	I0923 10:21:23.519710   14503 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-14 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.150.0.16 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0923 10:21:23.519755   14503 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0923 10:21:23.568548   14503 cni.go:84] Creating CNI manager for ""
	I0923 10:21:23.568576   14503 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:23.568586   14503 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:21:23.568606   14503 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.150.0.16 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-14 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.150.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.150.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:21:23.568743   14503 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.150.0.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-14"
	  kubeletExtraArgs:
	    node-ip: 10.150.0.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.150.0.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:21:23.568799   14503 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:21:23.577944   14503 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 10:21:23.578005   14503 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 10:21:23.585875   14503 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 10:21:23.585886   14503 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 10:21:23.585899   14503 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 10:21:23.585923   14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:21:23.585962   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 10:21:23.585968   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 10:21:23.598069   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 10:21:23.636445   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube921111766 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 10:21:23.646655   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2618589158 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 10:21:23.671937   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563588793 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 10:21:23.738901   14503 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:21:23.748287   14503 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0923 10:21:23.748312   14503 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:21:23.748357   14503 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:21:23.756247   14503 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 10:21:23.756397   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4278880996 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 10:21:23.764888   14503 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0923 10:21:23.764912   14503 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0923 10:21:23.764953   14503 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0923 10:21:23.772935   14503 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:21:23.773098   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1854200395 /lib/systemd/system/kubelet.service
	I0923 10:21:23.781390   14503 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 10:21:23.781522   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1075798674 /var/tmp/minikube/kubeadm.yaml.new
	I0923 10:21:23.790285   14503 exec_runner.go:51] Run: grep 10.150.0.16	control-plane.minikube.internal$ /etc/hosts
	I0923 10:21:23.791817   14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:21:24.018441   14503 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:21:24.034684   14503 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube for IP: 10.150.0.16
	I0923 10:21:24.034707   14503 certs.go:194] generating shared ca certs ...
	I0923 10:21:24.034729   14503 certs.go:226] acquiring lock for ca certs: {Name:mk10a034bcc1c0616fe44cc8e593fe0ec22b8be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:24.034884   14503 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.key
	I0923 10:21:24.034947   14503 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3689/.minikube/proxy-client-ca.key
	I0923 10:21:24.034961   14503 certs.go:256] generating profile certs ...
	I0923 10:21:24.035037   14503 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.key
	I0923 10:21:24.035056   14503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.crt with IP's: []
	I0923 10:21:24.180531   14503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.crt ...
	I0923 10:21:24.180565   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.crt: {Name:mk5288fe1432e0a766b450b6d8afe83611266a2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:24.180704   14503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.key ...
	I0923 10:21:24.180747   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.key: {Name:mkde2adf88a49c1bb64334f50498662801f53efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:24.180821   14503 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0
	I0923 10:21:24.180835   14503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.150.0.16]
	I0923 10:21:24.232375   14503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0 ...
	I0923 10:21:24.232403   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0: {Name:mk4a77e46b001160b446596215b33355b2c74ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:24.232522   14503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0 ...
	I0923 10:21:24.232532   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0: {Name:mkf3b2b285a9e42cf9282d7aa9018345ee355df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:24.232587   14503 certs.go:381] copying /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0 -> /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt
	I0923 10:21:24.232670   14503 certs.go:385] copying /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0 -> /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key
	I0923 10:21:24.232725   14503 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key
	I0923 10:21:24.232736   14503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0923 10:21:24.286918   14503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt ...
	I0923 10:21:24.286947   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0cacb6cbb991de10b5cbab21f78b060a583593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:24.287068   14503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key ...
	I0923 10:21:24.287078   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key: {Name:mke22c67c96fa7a6327fc541375f15884f31ba42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:24.287246   14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 10:21:24.287282   14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/ca.pem (1078 bytes)
	I0923 10:21:24.287305   14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:21:24.287330   14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/key.pem (1679 bytes)
	I0923 10:21:24.287871   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:21:24.288006   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2089542381 /var/lib/minikube/certs/ca.crt
	I0923 10:21:24.296864   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:21:24.296998   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2244881567 /var/lib/minikube/certs/ca.key
	I0923 10:21:24.306200   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:21:24.306327   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2412715834 /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:21:24.314609   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 10:21:24.314778   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2217015731 /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:21:24.323830   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0923 10:21:24.323980   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3511140915 /var/lib/minikube/certs/apiserver.crt
	I0923 10:21:24.333484   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:21:24.333630   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube297324337 /var/lib/minikube/certs/apiserver.key
	I0923 10:21:24.342308   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:21:24.342476   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3067266897 /var/lib/minikube/certs/proxy-client.crt
	I0923 10:21:24.351287   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:21:24.351421   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4278780158 /var/lib/minikube/certs/proxy-client.key
	I0923 10:21:24.360613   14503 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0923 10:21:24.360636   14503 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:24.360669   14503 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:24.368423   14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:21:24.368575   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3569917701 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:24.377349   14503 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:21:24.377541   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1271971900 /var/lib/minikube/kubeconfig
	I0923 10:21:24.385873   14503 exec_runner.go:51] Run: openssl version
	I0923 10:21:24.388759   14503 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:21:24.398659   14503 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:24.399990   14503 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:24.400036   14503 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:21:24.402954   14503 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:21:24.411416   14503 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:21:24.412601   14503 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:21:24.412641   14503 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.150.0.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:24.412754   14503 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:21:24.428157   14503 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:21:24.437463   14503 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:21:24.446461   14503 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 10:21:24.467863   14503 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:21:24.476588   14503 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:21:24.476610   14503 kubeadm.go:157] found existing configuration files:
	
	I0923 10:21:24.476651   14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:21:24.484402   14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:21:24.484458   14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:21:24.491914   14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:21:24.499754   14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:21:24.499819   14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:21:24.507842   14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:21:24.516201   14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:21:24.516256   14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:21:24.525186   14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:21:24.533399   14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:21:24.533461   14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:21:24.541349   14503 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 10:21:24.578100   14503 kubeadm.go:310] W0923 10:21:24.577939   15376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:21:24.578656   14503 kubeadm.go:310] W0923 10:21:24.578599   15376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:21:24.580257   14503 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:21:24.580283   14503 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:21:24.671178   14503 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:21:24.671303   14503 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:21:24.671316   14503 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:21:24.671321   14503 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:21:24.681280   14503 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:21:24.685058   14503 out.go:235]   - Generating certificates and keys ...
	I0923 10:21:24.685117   14503 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:21:24.685129   14503 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:21:24.851793   14503 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:21:25.045916   14503 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:21:25.084012   14503 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:21:25.345658   14503 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:21:25.466973   14503 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:21:25.467008   14503 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-14] and IPs [10.150.0.16 127.0.0.1 ::1]
	I0923 10:21:25.530655   14503 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:21:25.530785   14503 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-14] and IPs [10.150.0.16 127.0.0.1 ::1]
	I0923 10:21:25.745130   14503 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:21:25.932003   14503 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:21:26.062990   14503 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:21:26.063162   14503 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:21:26.262182   14503 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:21:26.549221   14503 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:21:26.716717   14503 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:21:26.766183   14503 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:21:27.083988   14503 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:21:27.084541   14503 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:21:27.086810   14503 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:21:27.088923   14503 out.go:235]   - Booting up control plane ...
	I0923 10:21:27.088960   14503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:21:27.088982   14503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:21:27.089435   14503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:21:27.114718   14503 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:21:27.120016   14503 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:21:27.120050   14503 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:21:27.357126   14503 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:21:27.357155   14503 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:21:27.858879   14503 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.724184ms
	I0923 10:21:27.858905   14503 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:21:32.860566   14503 kubeadm.go:310] [api-check] The API server is healthy after 5.001673562s
	I0923 10:21:32.873499   14503 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:21:32.885632   14503 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:21:32.907676   14503 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:21:32.907702   14503 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-14 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:21:32.916273   14503 kubeadm.go:310] [bootstrap-token] Using token: 159jws.lpwtfljcxiulbgh7
	I0923 10:21:32.917661   14503 out.go:235]   - Configuring RBAC rules ...
	I0923 10:21:32.917688   14503 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:21:32.921368   14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:21:32.928142   14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:21:32.930689   14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:21:32.934937   14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:21:32.937751   14503 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:21:33.268626   14503 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:21:33.687519   14503 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:21:34.267786   14503 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:21:34.268645   14503 kubeadm.go:310] 
	I0923 10:21:34.268657   14503 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:21:34.268661   14503 kubeadm.go:310] 
	I0923 10:21:34.268666   14503 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:21:34.268670   14503 kubeadm.go:310] 
	I0923 10:21:34.268675   14503 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:21:34.268679   14503 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:21:34.268682   14503 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:21:34.268686   14503 kubeadm.go:310] 
	I0923 10:21:34.268689   14503 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:21:34.268693   14503 kubeadm.go:310] 
	I0923 10:21:34.268697   14503 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:21:34.268700   14503 kubeadm.go:310] 
	I0923 10:21:34.268704   14503 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:21:34.268707   14503 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:21:34.268711   14503 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:21:34.268715   14503 kubeadm.go:310] 
	I0923 10:21:34.268719   14503 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:21:34.268723   14503 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:21:34.268727   14503 kubeadm.go:310] 
	I0923 10:21:34.268730   14503 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 159jws.lpwtfljcxiulbgh7 \
	I0923 10:21:34.268735   14503 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:912df576ac3a30e2c8fe7e582ef2e1cefa71f1abe1ae22d12bbdb9d33952da04 \
	I0923 10:21:34.268739   14503 kubeadm.go:310] 	--control-plane 
	I0923 10:21:34.268743   14503 kubeadm.go:310] 
	I0923 10:21:34.268747   14503 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:21:34.268751   14503 kubeadm.go:310] 
	I0923 10:21:34.268755   14503 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 159jws.lpwtfljcxiulbgh7 \
	I0923 10:21:34.268759   14503 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:912df576ac3a30e2c8fe7e582ef2e1cefa71f1abe1ae22d12bbdb9d33952da04 
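The join commands above pin the cluster CA via `--discovery-token-ca-cert-hash`: the value is `sha256:` plus the SHA-256 digest of the CA's DER-encoded public key. As a hedged sketch (the demo CA below is a throwaway generated in `/tmp`; on a real control plane the input would be kubeadm's default `/etc/kubernetes/pki/ca.crt`), the hash can be recomputed like so:

```shell
# Generate a throwaway self-signed CA purely for demonstration
# (assumption: on the real node you would hash /etc/kubernetes/pki/ca.crt instead).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it -- the same pipeline
# kubeadm documents for deriving the discovery-token-ca-cert-hash value.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print "sha256:" $NF}'
```

A joining node compares this pin against the CA it receives during bootstrap, so the value in the log and the value computed on the control plane must match.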
	I0923 10:21:34.271529   14503 cni.go:84] Creating CNI manager for ""
	I0923 10:21:34.271553   14503 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:34.273534   14503 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:21:34.274871   14503 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:21:34.286762   14503 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:21:34.286920   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4250734174 /etc/cni/net.d/1-k8s.conflist
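The 496-byte `/etc/cni/net.d/1-k8s.conflist` written above is minikube's bridge CNI configuration. A representative conflist of this shape (field values are illustrative, not the exact bytes minikube generated here) chains the `bridge` plugin with `host-local` IPAM and `portmap`:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```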
	I0923 10:21:34.298046   14503 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:21:34.298186   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-14 minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0923 10:21:34.298208   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:34.306828   14503 ops.go:34] apiserver oom_adj: -16
	I0923 10:21:34.367518   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:34.867819   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:35.368311   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:35.868488   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:36.368174   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:36.868201   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:37.368120   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:37.867711   14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:21:37.939149   14503 kubeadm.go:1113] duration metric: took 3.641054991s to wait for elevateKubeSystemPrivileges
	I0923 10:21:37.939185   14503 kubeadm.go:394] duration metric: took 13.526545701s to StartCluster
	I0923 10:21:37.939206   14503 settings.go:142] acquiring lock: {Name:mk859aef9f68053644345f1d9ec880181c903239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:37.939269   14503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3689/kubeconfig
	I0923 10:21:37.939984   14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/kubeconfig: {Name:mk51e817e2092847322764330e83dc7db829c6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:37.940203   14503 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:21:37.940254   14503 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:21:37.940341   14503 addons.go:69] Setting yakd=true in profile "minikube"
	I0923 10:21:37.940355   14503 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0923 10:21:37.940370   14503 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0923 10:21:37.940378   14503 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0923 10:21:37.940391   14503 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0923 10:21:37.940381   14503 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0923 10:21:37.940415   14503 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0923 10:21:37.940362   14503 addons.go:234] Setting addon yakd=true in "minikube"
	I0923 10:21:37.940421   14503 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:37.940437   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.940453   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.940441   14503 addons.go:69] Setting volcano=true in profile "minikube"
	I0923 10:21:37.940468   14503 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0923 10:21:37.940486   14503 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0923 10:21:37.940490   14503 addons.go:234] Setting addon volcano=true in "minikube"
	I0923 10:21:37.940488   14503 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0923 10:21:37.940516   14503 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0923 10:21:37.940540   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.940541   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.941041   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.941067   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.941070   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.941087   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.941104   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.941126   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.941145   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.941161   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.941198   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.941256   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.941278   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.941290   14503 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0923 10:21:37.941310   14503 mustload.go:65] Loading cluster: minikube
	I0923 10:21:37.941315   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.941496   14503 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:21:37.941586   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.941616   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.941625   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.941640   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.941668   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.941764   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.942221   14503 out.go:177] * Configuring local host environment ...
	I0923 10:21:37.942383   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.942403   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.942442   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.940415   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.942475   14503 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0923 10:21:37.942492   14503 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0923 10:21:37.942519   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.942878   14503 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0923 10:21:37.942932   14503 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0923 10:21:37.942968   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.943207   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.943221   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.943250   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.943279   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.943293   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.943342   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.943539   14503 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0923 10:21:37.943563   14503 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0923 10:21:37.943588   14503 host.go:66] Checking if "minikube" exists ...
	W0923 10:21:37.943697   14503 out.go:270] * 
	W0923 10:21:37.943715   14503 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0923 10:21:37.943746   14503 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0923 10:21:37.943759   14503 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0923 10:21:37.943785   14503 out.go:270] * 
	W0923 10:21:37.943858   14503 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0923 10:21:37.943869   14503 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0923 10:21:37.943875   14503 out.go:270] * 
	W0923 10:21:37.943897   14503 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0923 10:21:37.943911   14503 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0923 10:21:37.943944   14503 out.go:270] * 
	W0923 10:21:37.943953   14503 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0923 10:21:37.943987   14503 start.go:235] Will wait 6m0s for node &{Name: IP:10.150.0.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:21:37.942461   14503 addons.go:69] Setting registry=true in profile "minikube"
	I0923 10:21:37.944500   14503 addons.go:234] Setting addon registry=true in "minikube"
	I0923 10:21:37.944575   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.945226   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.945246   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.945275   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.945350   14503 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0923 10:21:37.945375   14503 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0923 10:21:37.945397   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:37.945880   14503 out.go:177] * Verifying Kubernetes components...
	I0923 10:21:37.947410   14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 10:21:37.961276   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:37.961374   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:37.961474   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:37.961979   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:37.962595   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:37.977055   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
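The repeated `sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup` runs above locate the apiserver's cgroup-v1 freezer hierarchy; the path it yields is then appended to `/sys/fs/cgroup/freezer/` and `freezer.state` is read (as the later `sudo cat .../freezer.state` lines show). A self-contained sketch of what the egrep extracts (the `/proc` content below is a fabricated sample, not taken from this host):

```shell
# Simulated /proc/<pid>/cgroup content (cgroup v1): one "N:controller:path"
# line per hierarchy. Only the freezer line carries the pod-scoped path
# used to build /sys/fs/cgroup/freezer/<path>/freezer.state.
printf '11:cpuset:/kubepods/burstable/podXYZ/abc\n7:freezer:/kubepods/burstable/podXYZ/abc\n' \
  | egrep '^[0-9]+:freezer:'
# -> 7:freezer:/kubepods/burstable/podXYZ/abc
```

A `freezer.state` of `THAWED`, as logged below, means the container's processes are running normally and the healthz probe can proceed.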
	I0923 10:21:37.979263   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.979294   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.979328   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.980028   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.980051   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.980082   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.980366   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:37.980382   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:37.980413   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:37.985305   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:37.985373   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:37.987781   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:37.987862   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:37.993641   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:37.993718   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:37.995086   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:37.995216   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:37.995296   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:37.995350   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:37.997737   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.001545   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.001616   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.008676   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.010658   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.010875   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.016486   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.016537   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.018176   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.018202   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.024504   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.024533   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.025117   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.025139   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.026399   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.028550   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.030068   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.030273   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.031272   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.032430   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.032489   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.033585   14503 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0923 10:21:38.033629   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:38.033840   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.033996   14503 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0923 10:21:38.034040   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:38.038793   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.038825   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.040028   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:38.040854   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:21:38.041435   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.041462   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.042507   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.042972   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:38.042995   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:38.043024   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:38.040049   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:38.043195   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:38.043500   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.044189   14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:21:38.044208   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.044239   14503 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:21:38.044277   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.044432   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1246490440 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:21:38.046494   14503 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:21:38.046837   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.048602   14503 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 10:21:38.049349   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.049370   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.049812   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.049857   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.049870   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.049883   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.051467   14503 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 10:21:38.052689   14503 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 10:21:38.054046   14503 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:38.054071   14503 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0923 10:21:38.054078   14503 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:38.054173   14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:38.055240   14503 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:21:38.055281   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 10:21:38.055730   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube707534564 /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:21:38.057880   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.057935   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.057889   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.064632   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.065750   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.067312   14503 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:21:38.068433   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.068454   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:38.068641   14503 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:21:38.068668   14503 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:21:38.068809   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3746917818 /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:21:38.070167   14503 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:21:38.071603   14503 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:21:38.071634   14503 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:21:38.071759   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1057678417 /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:21:38.074063   14503 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:21:38.074088   14503 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:21:38.074403   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1547504557 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:21:38.075675   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.075695   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.083687   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.083932   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.083953   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.084435   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.084493   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.085202   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.085220   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.085684   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.087963   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:21:38.089534   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:21:38.091333   14503 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
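The one-liner above rewrites the CoreDNS Corefile in place: its sed program inserts a `hosts { ... }` block (mapping `host.minikube.internal` to the host) before the `forward . /etc/resolv.conf` directive, and a `log` directive before `errors`. Applied to a minimal two-line Corefile fragment (sample input; GNU sed assumed, since the `\n` escapes in the insert text are a GNU extension):

```shell
# Feed a stripped-down Corefile fragment through the same sed program
# minikube runs (sample input lines; indentation matches CoreDNS's Corefile).
printf '        errors\n        forward . /etc/resolv.conf {\n' \
  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' \
        -e '/^        errors *$/i \        log'
```

The output gains a `log` line ahead of `errors` and the full `hosts` block ahead of `forward`, which is then piped to `kubectl replace -f -` to update the live ConfigMap.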
	I0923 10:21:38.092616   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.092620   14503 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:21:38.092645   14503 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:21:38.092768   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2739954059 /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:21:38.093284   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:21:38.094310   14503 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:21:38.094535   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.095183   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:38.095892   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:21:38.095952   14503 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:21:38.095975   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:21:38.096102   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1590517906 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:21:38.096295   14503 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:21:38.097483   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:21:38.097543   14503 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:21:38.097961   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:21:38.100458   14503 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:21:38.100501   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:21:38.100637   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3362508289 /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:21:38.102550   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:21:38.103934   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:21:38.105329   14503 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:21:38.106572   14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:21:38.106606   14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:21:38.106718   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3016158629 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:21:38.109647   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.109711   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.112788   14503 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:21:38.112819   14503 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:21:38.112966   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2370514550 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:21:38.113198   14503 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:21:38.113228   14503 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:21:38.113336   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4017615969 /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:21:38.116562   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.116619   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.116798   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:38.116840   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:38.118253   14503 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:21:38.118273   14503 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:21:38.118368   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1751057690 /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:21:38.118635   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.118661   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.123426   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.125192   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.125216   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.125723   14503 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:21:38.127186   14503 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:21:38.127215   14503 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:21:38.127344   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube423584811 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:21:38.132074   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.134022   14503 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:21:38.135257   14503 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:21:38.135287   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:21:38.135423   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3071771337 /etc/kubernetes/addons/deployment.yaml
	I0923 10:21:38.137752   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:21:38.143398   14503 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:21:38.143433   14503 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:21:38.143574   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3484133606 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:21:38.145608   14503 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:21:38.145633   14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:21:38.145729   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2319206378 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:21:38.220476   14503 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:21:38.220516   14503 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:21:38.220561   14503 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:21:38.220596   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:21:38.220647   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1180843149 /etc/kubernetes/addons/ig-role.yaml
	I0923 10:21:38.220922   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube703366994 /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:21:38.222619   14503 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:21:38.222653   14503 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:21:38.222883   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3828019453 /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:21:38.223276   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:21:38.226246   14503 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:21:38.226286   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:21:38.226494   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1447327316 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:21:38.227916   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.227949   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.229640   14503 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:21:38.229672   14503 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:21:38.229866   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1613859934 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:21:38.233867   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.233920   14503 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:38.233938   14503 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0923 10:21:38.233946   14503 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:38.233991   14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:38.240370   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:38.240404   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:38.245559   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:38.247457   14503 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:21:38.247637   14503 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:21:38.247666   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:21:38.248149   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1416855435 /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:21:38.250542   14503 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:21:38.251844   14503 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:21:38.251876   14503 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:21:38.251933   14503 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:21:38.251957   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:21:38.252001   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube260296824 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:21:38.252073   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2246154247 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:21:38.253608   14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:21:38.253651   14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:21:38.253790   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2527539954 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:21:38.256524   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:21:38.256844   14503 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:21:38.256993   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube584424061 /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:38.263302   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:21:38.263524   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3595981532 /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:38.272541   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:21:38.277769   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:21:38.286678   14503 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:21:38.286725   14503 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:21:38.286809   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:21:38.286874   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube765725782 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:21:38.291400   14503 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:21:38.291441   14503 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:21:38.291606   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2689698310 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:21:38.308983   14503 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:21:38.309027   14503 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:21:38.309165   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1470380499 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:21:38.309719   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:21:38.321989   14503 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:38.322041   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:21:38.322260   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1265008389 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:38.332179   14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:21:38.332220   14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:21:38.332398   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3455428546 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:21:38.335259   14503 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:21:38.335288   14503 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:21:38.335406   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube536218479 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:21:38.351497   14503 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:21:38.351535   14503 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:21:38.351692   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3160167729 /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:21:38.358319   14503 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 10:21:38.358629   14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:21:38.358654   14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:21:38.358789   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1681517526 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:21:38.360634   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:38.368087   14503 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:21:38.368121   14503 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:21:38.368255   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1546842042 /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:21:38.394341   14503 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0923 10:21:38.440062   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:21:38.441391   14503 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:21:38.441419   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:21:38.441538   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2448725208 /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:21:38.447824   14503 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-14" to be "Ready" ...
	I0923 10:21:38.452045   14503 node_ready.go:49] node "ubuntu-20-agent-14" has status "Ready":"True"
	I0923 10:21:38.452068   14503 node_ready.go:38] duration metric: took 4.126904ms for node "ubuntu-20-agent-14" to be "Ready" ...
	I0923 10:21:38.452079   14503 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:21:38.459817   14503 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:38.472260   14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:21:38.472292   14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:21:38.472427   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1973014927 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:21:38.495977   14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:21:38.496009   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:21:38.496163   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3968753410 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:21:38.515500   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:21:38.525105   14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:21:38.525142   14503 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:21:38.526396   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1694692617 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:21:38.572191   14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:21:38.572222   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:21:38.572353   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube777964739 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:21:38.591594   14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:21:38.591630   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:21:38.591786   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2691097496 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:21:38.643613   14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:21:38.643663   14503 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:21:38.644378   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube458797997 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:21:38.707122   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:21:38.903443   14503 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0923 10:21:38.906200   14503 addons.go:475] Verifying addon registry=true in "minikube"
	I0923 10:21:38.908775   14503 out.go:177] * Verifying registry addon...
	I0923 10:21:38.911934   14503 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:21:38.916883   14503 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:21:38.916908   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:39.260753   14503 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0923 10:21:39.427690   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:39.448260   14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.175661017s)
	I0923 10:21:39.517787   14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.077669266s)
	I0923 10:21:39.517824   14503 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0923 10:21:39.522220   14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.006633645s)
	I0923 10:21:39.921720   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:40.198768   14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.838083967s)
	W0923 10:21:40.198947   14503 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:21:40.198990   14503 retry.go:31] will retry after 252.085237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
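The failure above is the usual CRD-establishment race: the VolumeSnapshot CRDs are created in the same `kubectl apply` that also creates a `VolumeSnapshotClass`, so the custom-resource apply fails with "no matches for kind" until the CRDs are registered, and minikube simply retries after a short backoff (the `retry.go:31` line). A runnable sketch of that retry loop, with a stub standing in for the real `kubectl apply` (the stub's fail-once-then-succeed behavior is an assumption made so the sketch runs without a cluster):

```shell
# Stub for "kubectl apply -f <snapshot manifests>": fails the first time
# (CRDs not yet established), succeeds on the retry.
CRDS_READY=0
apply_manifests() {
  if [ "$CRDS_READY" -eq 0 ]; then
    CRDS_READY=1
    echo 'error: no matches for kind "VolumeSnapshotClass" - ensure CRDs are installed first' >&2
    return 1
  fi
  echo 'volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass created'
}

# Retry loop mirroring minikube's apply-then-retry-on-failure behavior.
attempts=0
until apply_manifests; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 5 ]; then
    echo 'giving up after repeated apply failures' >&2
    exit 1
  fi
  sleep 0.25   # minikube picks a small randomized backoff (~252ms in the log)
done
echo "apply succeeded after $attempts retry"
```

Against a live cluster, inserting `kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io` between a CRD apply and the CR apply avoids this race without relying on the retry.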
	I0923 10:21:40.415565   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:40.451990   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:21:40.465723   14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:40.927244   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:41.058966   14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.960954263s)
	I0923 10:21:41.394252   14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.686972504s)
	I0923 10:21:41.394310   14503 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0923 10:21:41.398029   14503 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:21:41.401452   14503 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:21:41.406508   14503 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:21:41.406542   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:41.416426   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:41.597886   14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.145807847s)
	I0923 10:21:41.909255   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:41.916166   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:42.406457   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:42.416061   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:42.466592   14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:42.906788   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:42.916154   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:43.407086   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:43.416317   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:43.906645   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:43.915943   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:44.407111   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:44.415313   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:44.906059   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:44.916094   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:44.965990   14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:45.077264   14503 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:21:45.077408   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3374174569 /var/lib/minikube/google_application_credentials.json
	I0923 10:21:45.089201   14503 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:21:45.089330   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2345360888 /var/lib/minikube/google_cloud_project
	I0923 10:21:45.101068   14503 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0923 10:21:45.101126   14503 host.go:66] Checking if "minikube" exists ...
	I0923 10:21:45.101958   14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
	I0923 10:21:45.101982   14503 api_server.go:166] Checking apiserver status ...
	I0923 10:21:45.102021   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:45.123893   14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
	I0923 10:21:45.137507   14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
	I0923 10:21:45.137581   14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
	I0923 10:21:45.148293   14503 api_server.go:204] freezer state: "THAWED"
	I0923 10:21:45.148321   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:45.152648   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:45.152803   14503 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:21:45.155723   14503 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:21:45.157298   14503 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:21:45.158581   14503 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:21:45.158626   14503 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:21:45.158778   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2179341002 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:21:45.169606   14503 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:21:45.169645   14503 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:21:45.169780   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube497909863 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:21:45.181323   14503 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:21:45.181354   14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:21:45.181461   14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3084832486 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:21:45.192626   14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:21:45.406175   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:45.415099   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:46.083415   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:46.084393   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:46.194451   14503 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0923 10:21:46.195974   14503 out.go:177] * Verifying gcp-auth addon...
	I0923 10:21:46.198098   14503 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:21:46.200381   14503 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:21:46.405750   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:46.416216   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:46.905397   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:46.915445   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:47.406770   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:47.415781   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:47.465907   14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
	I0923 10:21:47.906003   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:47.916353   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:48.507753   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:48.508565   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:48.908273   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:48.967062   14503 pod_ready.go:93] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:48.967089   14503 pod_ready.go:82] duration metric: took 10.507247017s for pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:48.967099   14503 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:48.971977   14503 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:48.972004   14503 pod_ready.go:82] duration metric: took 4.897345ms for pod "kube-apiserver-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:48.972018   14503 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:48.977715   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:49.405863   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:49.415114   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:49.478325   14503 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:49.478344   14503 pod_ready.go:82] duration metric: took 506.318863ms for pod "kube-controller-manager-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:49.478354   14503 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:49.482300   14503 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
	I0923 10:21:49.482322   14503 pod_ready.go:82] duration metric: took 3.961039ms for pod "kube-scheduler-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
	I0923 10:21:49.482333   14503 pod_ready.go:39] duration metric: took 11.030240368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:21:49.482355   14503 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:21:49.482413   14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:21:49.501517   14503 api_server.go:72] duration metric: took 11.557408673s to wait for apiserver process to appear ...
	I0923 10:21:49.501548   14503 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:21:49.501577   14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
	I0923 10:21:49.505603   14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
	ok
	I0923 10:21:49.506538   14503 api_server.go:141] control plane version: v1.31.1
	I0923 10:21:49.506565   14503 api_server.go:131] duration metric: took 5.009313ms to wait for apiserver health ...
	I0923 10:21:49.506576   14503 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:21:49.514838   14503 system_pods.go:59] 16 kube-system pods found
	I0923 10:21:49.514904   14503 system_pods.go:61] "coredns-7c65d6cfc9-5wzm7" [d5873fad-13d1-45af-a03b-45c4b855def2] Running
	I0923 10:21:49.514917   14503 system_pods.go:61] "csi-hostpath-attacher-0" [3a0eea15-a39e-4405-8fbf-a222f5615313] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:21:49.514932   14503 system_pods.go:61] "csi-hostpath-resizer-0" [cabd0f23-eb05-4c15-b63d-a277f022f80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:21:49.514948   14503 system_pods.go:61] "csi-hostpathplugin-nfj4v" [79523175-97ee-406e-8d74-e3335dcfa6af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:21:49.514958   14503 system_pods.go:61] "etcd-ubuntu-20-agent-14" [3a571447-937e-4b39-9db1-70ea79ff7a4a] Running
	I0923 10:21:49.514964   14503 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-14" [b02db37b-9c5b-4647-8204-a20e7ed4e588] Running
	I0923 10:21:49.514970   14503 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-14" [a111e4cd-7725-4c2a-af1f-562e8673fdc1] Running
	I0923 10:21:49.514975   14503 system_pods.go:61] "kube-proxy-9rf8g" [8b8f2ed9-4e48-4f1b-90db-03cbeacc08a1] Running
	I0923 10:21:49.514980   14503 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-14" [81113013-462f-4e54-869e-86ea8ab47602] Running
	I0923 10:21:49.514990   14503 system_pods.go:61] "metrics-server-84c5f94fbc-nnrdh" [f57d2252-a248-4969-9111-da3afb4eebd3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:21:49.514995   14503 system_pods.go:61] "nvidia-device-plugin-daemonset-t8s2p" [7c3d0947-5713-4d85-a7a0-09660e93cfcd] Running
	I0923 10:21:49.515003   14503 system_pods.go:61] "registry-66c9cd494c-8hvdw" [678aa223-edb6-4a6c-b3e5-5d95e0ea40f6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:21:49.515010   14503 system_pods.go:61] "registry-proxy-4nzb4" [35894a53-f7e8-4743-9eea-200f3986fcd6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:21:49.515016   14503 system_pods.go:61] "snapshot-controller-56fcc65765-q8vm4" [29d8b561-2be7-4cb9-8726-2ca9502c446f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:49.515023   14503 system_pods.go:61] "snapshot-controller-56fcc65765-w9bmc" [d74b1548-465c-47f2-b66f-97290b19df8a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:49.515026   14503 system_pods.go:61] "storage-provisioner" [18decbc8-e338-4af9-82cc-f90640dc8db2] Running
	I0923 10:21:49.515034   14503 system_pods.go:74] duration metric: took 8.452445ms to wait for pod list to return data ...
	I0923 10:21:49.515042   14503 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:21:49.517751   14503 default_sa.go:45] found service account: "default"
	I0923 10:21:49.517772   14503 default_sa.go:55] duration metric: took 2.724884ms for default service account to be created ...
	I0923 10:21:49.517780   14503 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:21:49.570028   14503 system_pods.go:86] 16 kube-system pods found
	I0923 10:21:49.570079   14503 system_pods.go:89] "coredns-7c65d6cfc9-5wzm7" [d5873fad-13d1-45af-a03b-45c4b855def2] Running
	I0923 10:21:49.570093   14503 system_pods.go:89] "csi-hostpath-attacher-0" [3a0eea15-a39e-4405-8fbf-a222f5615313] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:21:49.570395   14503 system_pods.go:89] "csi-hostpath-resizer-0" [cabd0f23-eb05-4c15-b63d-a277f022f80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:21:49.570432   14503 system_pods.go:89] "csi-hostpathplugin-nfj4v" [79523175-97ee-406e-8d74-e3335dcfa6af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:21:49.570443   14503 system_pods.go:89] "etcd-ubuntu-20-agent-14" [3a571447-937e-4b39-9db1-70ea79ff7a4a] Running
	I0923 10:21:49.570451   14503 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-14" [b02db37b-9c5b-4647-8204-a20e7ed4e588] Running
	I0923 10:21:49.570462   14503 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-14" [a111e4cd-7725-4c2a-af1f-562e8673fdc1] Running
	I0923 10:21:49.570469   14503 system_pods.go:89] "kube-proxy-9rf8g" [8b8f2ed9-4e48-4f1b-90db-03cbeacc08a1] Running
	I0923 10:21:49.570480   14503 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-14" [81113013-462f-4e54-869e-86ea8ab47602] Running
	I0923 10:21:49.570491   14503 system_pods.go:89] "metrics-server-84c5f94fbc-nnrdh" [f57d2252-a248-4969-9111-da3afb4eebd3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:21:49.570506   14503 system_pods.go:89] "nvidia-device-plugin-daemonset-t8s2p" [7c3d0947-5713-4d85-a7a0-09660e93cfcd] Running
	I0923 10:21:49.570522   14503 system_pods.go:89] "registry-66c9cd494c-8hvdw" [678aa223-edb6-4a6c-b3e5-5d95e0ea40f6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:21:49.570538   14503 system_pods.go:89] "registry-proxy-4nzb4" [35894a53-f7e8-4743-9eea-200f3986fcd6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:21:49.570554   14503 system_pods.go:89] "snapshot-controller-56fcc65765-q8vm4" [29d8b561-2be7-4cb9-8726-2ca9502c446f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:49.570572   14503 system_pods.go:89] "snapshot-controller-56fcc65765-w9bmc" [d74b1548-465c-47f2-b66f-97290b19df8a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:21:49.570587   14503 system_pods.go:89] "storage-provisioner" [18decbc8-e338-4af9-82cc-f90640dc8db2] Running
	I0923 10:21:49.570600   14503 system_pods.go:126] duration metric: took 52.812878ms to wait for k8s-apps to be running ...
	I0923 10:21:49.570612   14503 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:21:49.570678   14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:21:49.587814   14503 system_svc.go:56] duration metric: took 17.18863ms WaitForService to wait for kubelet
	I0923 10:21:49.587851   14503 kubeadm.go:582] duration metric: took 11.643747324s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:49.587876   14503 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:21:49.764259   14503 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:21:49.764294   14503 node_conditions.go:123] node cpu capacity is 8
	I0923 10:21:49.764308   14503 node_conditions.go:105] duration metric: took 176.426384ms to run NodePressure ...
	I0923 10:21:49.764322   14503 start.go:241] waiting for startup goroutines ...
	I0923 10:21:49.906521   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:49.915933   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:50.405140   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:50.415221   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:50.906978   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:50.915763   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:51.406053   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:51.415144   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:51.907475   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:51.914953   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:52.405726   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:52.415656   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:52.906194   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:52.915150   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:53.405748   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:53.415910   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:53.906513   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:53.915924   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:54.406225   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:54.416164   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:54.905488   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:54.915562   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:55.406820   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:55.415665   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:55.906387   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:55.915427   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:21:56.406971   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:56.415637   14503 kapi.go:107] duration metric: took 17.503706177s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:21:56.906672   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:57.406782   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:57.905844   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:58.406679   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:58.906888   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:59.407712   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:21:59.906319   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:00.406397   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:00.905457   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:01.406653   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:01.907480   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:02.405985   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:02.906867   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:03.406243   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:03.906102   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:04.406339   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:04.906013   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:05.406140   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:05.906651   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:06.406703   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:06.906665   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:07.406169   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:07.906078   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:08.407243   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:08.906711   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:09.405824   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:09.905690   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:10.407459   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:10.907268   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:11.406613   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:11.905038   14503 kapi.go:107] duration metric: took 30.503589218s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:22:27.701454   14503 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:22:27.701485   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:28.201668   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:28.702152   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:29.201240   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:29.700555   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:30.201894   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:30.702345   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:31.202002   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:31.701122   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:32.201037   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:32.701317   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:33.200793   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:33.702028   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:34.201583   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:34.701290   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:35.202604   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:35.701699   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:36.201941   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:36.702056   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:37.201077   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:37.700789   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:38.201104   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:38.700621   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:39.201685   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:39.701535   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:40.201586   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:40.701820   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.201406   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.700970   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.201917   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.702177   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.201160   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.701568   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.201598   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.701363   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.201083   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.700853   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.200881   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.701700   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.201549   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.701006   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.200907   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.701122   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.201471   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.701247   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.201325   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.701438   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.201134   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.700876   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.201678   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.701894   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.220060   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.703774   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.201081   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.701812   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.202019   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.701194   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.201198   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.700780   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.201886   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.701957   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.200797   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.702247   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.201177   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.700825   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.201713   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.702049   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.202019   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.701014   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.201226   14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.701493   14503 kapi.go:107] duration metric: took 1m16.503392252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:23:02.703705   14503 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0923 10:23:02.705378   14503 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:23:02.707040   14503 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:23:02.708912   14503 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, storage-provisioner, yakd, storage-provisioner-rancher, metrics-server, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0923 10:23:02.710572   14503 addons.go:510] duration metric: took 1m24.77033752s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner storage-provisioner yakd storage-provisioner-rancher metrics-server inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0923 10:23:02.710628   14503 start.go:246] waiting for cluster config update ...
	I0923 10:23:02.710651   14503 start.go:255] writing updated cluster config ...
	I0923 10:23:02.710989   14503 exec_runner.go:51] Run: rm -f paused
	I0923 10:23:02.756604   14503 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:23:02.758717   14503 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Mon 2024-08-26 20:51:04 UTC, end at Mon 2024-09-23 10:32:55 UTC. --
	Sep 23 10:25:04 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:25:04Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:25:06 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:25:06.289362488Z" level=info msg="ignoring event" container=7f298b17f6ef65355bf64e73777e1e5e98f9121a93deedae419228f701a7e404 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:25:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:25:07.811464067Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=83935512cb0e36c8 traceID=3dd788051f76aaa6ebee96a31b148398
	Sep 23 10:25:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:25:07.815270548Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=83935512cb0e36c8 traceID=3dd788051f76aaa6ebee96a31b148398
	Sep 23 10:26:32 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:26:32.817014659Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=8a11cc7f365f88f6 traceID=fa8bf6c20e5e07ea007b4d1ec84d7e89
	Sep 23 10:26:32 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:26:32.819193473Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=8a11cc7f365f88f6 traceID=fa8bf6c20e5e07ea007b4d1ec84d7e89
	Sep 23 10:27:47 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:27:47Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 10:27:49 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:27:49.363106193Z" level=info msg="ignoring event" container=19bc9bdaffa6ca1785506c4e9a9ebb2a8ba015cf365fda6d059b5d3a6aec0814 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:29:20 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:29:20.810253208Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=521730c1c60d4162 traceID=d8d809143cecf3f3830d65801de13869
	Sep 23 10:29:20 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:29:20.812691319Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=521730c1c60d4162 traceID=d8d809143cecf3f3830d65801de13869
	Sep 23 10:31:54 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:31:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f41a0ce81dad7e64e942ffd0ab659aa0bf5f6b16796f020c435fcdcbaa231cbb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 23 10:31:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:31:54.725607832Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8a767bd34081992d traceID=32016fc8fcd97f2d60da52ff6834b925
	Sep 23 10:31:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:31:54.727862192Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8a767bd34081992d traceID=32016fc8fcd97f2d60da52ff6834b925
	Sep 23 10:31:55 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:31:55.745458427Z" level=info msg="ignoring event" container=f41a0ce81dad7e64e942ffd0ab659aa0bf5f6b16796f020c435fcdcbaa231cbb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:31:55 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:31:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ecd0359970545bb26f3f592747b58ffe3527516bf02a9d1ed47e1f7e0175dce/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 23 10:32:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:07.819425738Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=429ec37243ce98e4 traceID=0bfaf718ae9aa0af010f760207d714e3
	Sep 23 10:32:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:07.821715925Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=429ec37243ce98e4 traceID=0bfaf718ae9aa0af010f760207d714e3
	Sep 23 10:32:36 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:36.813953183Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a6d3e2506f50bafc traceID=15646b462a54bf9dfe4bcd65f35b1522
	Sep 23 10:32:36 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:36.816413210Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a6d3e2506f50bafc traceID=15646b462a54bf9dfe4bcd65f35b1522
	Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.271672386Z" level=info msg="ignoring event" container=8ecd0359970545bb26f3f592747b58ffe3527516bf02a9d1ed47e1f7e0175dce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.561368758Z" level=info msg="ignoring event" container=904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.619323276Z" level=info msg="ignoring event" container=c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.723900181Z" level=info msg="ignoring event" container=76622f96976a61ebb90e298bb01c77ca97f03d8efc0d4be247ee82e8bd518ed9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.785114696Z" level=info msg="ignoring event" container=3d503d7acf0011f2fdf522e648365ff5ef0fb4386566446c3e637d26ce511ce4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:32:54 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:32:54Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	22352cc886702       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            1 second ago        Running             gadget                                   7                   8eef71572cf09       gadget-w2hzg
	19bc9bdaffa6c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   8eef71572cf09       gadget-w2hzg
	c300d56f5b854       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   b643a70104ccf       gcp-auth-89d5ffd79-6kvbf
	bfad91385c2d2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   046b887c2a0d6       csi-hostpathplugin-nfj4v
	15acaecbec354       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   046b887c2a0d6       csi-hostpathplugin-nfj4v
	a5ec093612b98       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   046b887c2a0d6       csi-hostpathplugin-nfj4v
	46e9cced4c57d       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   046b887c2a0d6       csi-hostpathplugin-nfj4v
	d26c55945b0ac       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   046b887c2a0d6       csi-hostpathplugin-nfj4v
	d8e19afc126d9       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   0ccc27fcecd99       csi-hostpath-resizer-0
	6ca977d99b2bd       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   e0644af257ee8       csi-hostpath-attacher-0
	9a2be67883038       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   046b887c2a0d6       csi-hostpathplugin-nfj4v
	71e3784777015       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   075e5b313127f       snapshot-controller-56fcc65765-w9bmc
	a8d5e1490050d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   e877e9aa00ffc       snapshot-controller-56fcc65765-q8vm4
	4cbb9e4f61fd6       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   4e8e120c340de       local-path-provisioner-86d989889c-x4dtk
	64d8f5dd44360       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   b908ae4e50b0c       metrics-server-84c5f94fbc-nnrdh
	0a5c8535fcb39       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   66be03f8a385d       yakd-dashboard-67d98fc6b-kf48r
	c46422e23a32e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   3d503d7acf001       registry-proxy-4nzb4
	904225ddf913e       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   76622f96976a6       registry-66c9cd494c-8hvdw
	e3d21975c1b48       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   1d5c3407a0954       cloud-spanner-emulator-5b584cc74-psstj
	4ee978a90a8ab       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   b88a054245569       nvidia-device-plugin-daemonset-t8s2p
	e97d25581fbca       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   24c635eb81040       coredns-7c65d6cfc9-5wzm7
	29d253e8d623a       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   92a657b41c8a5       storage-provisioner
	d4b4134082f2d       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   3b0d433197544       kube-proxy-9rf8g
	e4810d0b22eb9       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   f689bd81db477       kube-apiserver-ubuntu-20-agent-14
	7c0ce9c202251       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   56ee3378f5d5f       kube-controller-manager-ubuntu-20-agent-14
	c2d044bdb00e2       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   4e232eeccda83       etcd-ubuntu-20-agent-14
	d978992a060ad       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   0bd91dbfc879b       kube-scheduler-ubuntu-20-agent-14
	
	
	==> coredns [e97d25581fbc] <==
	[INFO] 10.244.0.7:38641 - 38636 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151742s
	[INFO] 10.244.0.7:50028 - 20629 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090517s
	[INFO] 10.244.0.7:50028 - 33680 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137922s
	[INFO] 10.244.0.7:58387 - 5177 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000100502s
	[INFO] 10.244.0.7:58387 - 41790 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000129977s
	[INFO] 10.244.0.7:55145 - 49590 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000083581s
	[INFO] 10.244.0.7:55145 - 9649 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000121667s
	[INFO] 10.244.0.7:56969 - 56607 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000049261s
	[INFO] 10.244.0.7:56969 - 59932 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000067922s
	[INFO] 10.244.0.7:49267 - 62470 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071772s
	[INFO] 10.244.0.7:49267 - 32257 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101661s
	[INFO] 10.244.0.22:52451 - 16054 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000315526s
	[INFO] 10.244.0.22:59849 - 37057 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000395804s
	[INFO] 10.244.0.22:50017 - 61796 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191788s
	[INFO] 10.244.0.22:36912 - 37270 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000202263s
	[INFO] 10.244.0.22:36315 - 45300 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000446114s
	[INFO] 10.244.0.22:48482 - 52497 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000513024s
	[INFO] 10.244.0.22:35338 - 16884 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003651566s
	[INFO] 10.244.0.22:55363 - 8433 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004289097s
	[INFO] 10.244.0.22:37918 - 27285 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003006472s
	[INFO] 10.244.0.22:36494 - 24992 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003157242s
	[INFO] 10.244.0.22:60636 - 52283 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003840319s
	[INFO] 10.244.0.22:53898 - 63064 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004447877s
	[INFO] 10.244.0.22:34908 - 34030 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001458917s
	[INFO] 10.244.0.22:47686 - 37552 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001577081s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-14
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-14
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-14
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-14"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:21:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-14
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:32:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:28:42 +0000   Mon, 23 Sep 2024 10:21:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:28:42 +0000   Mon, 23 Sep 2024 10:21:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:28:42 +0000   Mon, 23 Sep 2024 10:21:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:28:42 +0000   Mon, 23 Sep 2024 10:21:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.150.0.16
	  Hostname:    ubuntu-20-agent-14
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                406ac382-0a98-38ff-f706-d8fe8e823dbb
	  Boot ID:                    d3fe8ac7-d9e0-4b15-b63e-3a53514cb0a6
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-5b584cc74-psstj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-w2hzg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-6kvbf                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-5wzm7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-nfj4v                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-14                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-14             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-14    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9rf8g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-14             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-nnrdh               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-t8s2p          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-q8vm4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-w9bmc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-x4dtk       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-kf48r                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-14 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-14 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-14 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-14 event: Registered Node ubuntu-20-agent-14 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 8e f7 c2 23 6a 08 06
	[  +0.036412] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 94 41 71 2e 3c 08 06
	[Sep23 10:22] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a ec 0f 99 5e 7c 08 06
	[  +0.877496] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 cf b9 0e 2a 24 08 06
	[  +1.184288] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 23 a3 a7 df ad 08 06
	[  +4.616115] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a df 1e 3a 95 69 08 06
	[  +0.071226] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 30 13 a4 f0 02 08 06
	[  +0.442683] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 e2 99 96 a8 f5 08 06
	[  +4.893312] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 97 6d c6 63 7b 08 06
	[ +36.870122] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 72 fc 2d 44 26 08 06
	[  +0.044307] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e af f9 da 67 0c 08 06
	[Sep23 10:23] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 68 8b a9 f6 46 08 06
	[  +0.000506] IPv4: martian source 10.244.0.22 from 10.244.0.5, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 3f d4 2a 70 fd 08 06
	
	
	==> etcd [c2d044bdb00e] <==
	{"level":"info","ts":"2024-09-23T10:21:29.883346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:21:29.883731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.150.0.16:2379"}
	{"level":"info","ts":"2024-09-23T10:21:45.634448Z","caller":"traceutil/trace.go:171","msg":"trace[1639837950] linearizableReadLoop","detail":"{readStateIndex:834; appliedIndex:832; }","duration":"107.425793ms","start":"2024-09-23T10:21:45.527003Z","end":"2024-09-23T10:21:45.634429Z","steps":["trace[1639837950] 'read index received'  (duration: 39.791325ms)","trace[1639837950] 'applied index is now lower than readState.Index'  (duration: 67.633692ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:21:45.634580Z","caller":"traceutil/trace.go:171","msg":"trace[952413697] transaction","detail":"{read_only:false; response_revision:819; number_of_response:1; }","duration":"108.938233ms","start":"2024-09-23T10:21:45.525609Z","end":"2024-09-23T10:21:45.634547Z","steps":["trace[952413697] 'process raft request'  (duration: 108.681745ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:21:45.634627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.599617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
	{"level":"info","ts":"2024-09-23T10:21:45.634680Z","caller":"traceutil/trace.go:171","msg":"trace[1028306469] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:819; }","duration":"107.676755ms","start":"2024-09-23T10:21:45.526994Z","end":"2024-09-23T10:21:45.634671Z","steps":["trace[1028306469] 'agreement among raft nodes before linearized reading'  (duration: 107.507775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:21:45.885765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.680452ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6572038415507393716 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ranges/serviceips\" mod_revision:751 > success:<request_put:<key:\"/registry/ranges/serviceips\" value_size:130935 >> failure:<request_range:<key:\"/registry/ranges/serviceips\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T10:21:45.885841Z","caller":"traceutil/trace.go:171","msg":"trace[928141289] transaction","detail":"{read_only:false; response_revision:820; number_of_response:1; }","duration":"245.980017ms","start":"2024-09-23T10:21:45.639850Z","end":"2024-09-23T10:21:45.885830Z","steps":["trace[928141289] 'process raft request'  (duration: 125.692368ms)","trace[928141289] 'compare'  (duration: 119.500325ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:21:46.080894Z","caller":"traceutil/trace.go:171","msg":"trace[2089597629] linearizableReadLoop","detail":"{readStateIndex:838; appliedIndex:836; }","duration":"185.004422ms","start":"2024-09-23T10:21:45.895871Z","end":"2024-09-23T10:21:46.080875Z","steps":["trace[2089597629] 'read index received'  (duration: 61.637409ms)","trace[2089597629] 'applied index is now lower than readState.Index'  (duration: 123.366413ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:21:46.081029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.570257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-23T10:21:46.081067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.975446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ubuntu-20-agent-14\" ","response":"range_response_count:1 size:5788"}
	{"level":"info","ts":"2024-09-23T10:21:46.081157Z","caller":"traceutil/trace.go:171","msg":"trace[358894035] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ubuntu-20-agent-14; range_end:; response_count:1; response_revision:823; }","duration":"119.069826ms","start":"2024-09-23T10:21:45.962076Z","end":"2024-09-23T10:21:46.081146Z","steps":["trace[358894035] 'agreement among raft nodes before linearized reading'  (duration: 118.930277ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:21:46.081088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.208987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
	{"level":"info","ts":"2024-09-23T10:21:46.081239Z","caller":"traceutil/trace.go:171","msg":"trace[2131469492] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:823; }","duration":"185.3612ms","start":"2024-09-23T10:21:45.895867Z","end":"2024-09-23T10:21:46.081228Z","steps":["trace[2131469492] 'agreement among raft nodes before linearized reading'  (duration: 185.102663ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:21:46.081099Z","caller":"traceutil/trace.go:171","msg":"trace[814132744] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"177.654464ms","start":"2024-09-23T10:21:45.903435Z","end":"2024-09-23T10:21:46.081090Z","steps":["trace[814132744] 'agreement among raft nodes before linearized reading'  (duration: 177.532411ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:21:46.081037Z","caller":"traceutil/trace.go:171","msg":"trace[1422606710] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"186.350105ms","start":"2024-09-23T10:21:45.894674Z","end":"2024-09-23T10:21:46.081024Z","steps":["trace[1422606710] 'process raft request'  (duration: 143.806637ms)","trace[1422606710] 'compare'  (duration: 42.278558ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:21:46.081889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.001137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:21:46.081928Z","caller":"traceutil/trace.go:171","msg":"trace[435805856] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"168.044703ms","start":"2024-09-23T10:21:45.913874Z","end":"2024-09-23T10:21:46.081919Z","steps":["trace[435805856] 'agreement among raft nodes before linearized reading'  (duration: 167.191781ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:21:48.505502Z","caller":"traceutil/trace.go:171","msg":"trace[337875417] linearizableReadLoop","detail":"{readStateIndex:874; appliedIndex:873; }","duration":"102.12531ms","start":"2024-09-23T10:21:48.403358Z","end":"2024-09-23T10:21:48.505483Z","steps":["trace[337875417] 'read index received'  (duration: 102.009164ms)","trace[337875417] 'applied index is now lower than readState.Index'  (duration: 115.586µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:21:48.505582Z","caller":"traceutil/trace.go:171","msg":"trace[2090749146] transaction","detail":"{read_only:false; response_revision:858; number_of_response:1; }","duration":"103.562838ms","start":"2024-09-23T10:21:48.402005Z","end":"2024-09-23T10:21:48.505568Z","steps":["trace[2090749146] 'process raft request'  (duration: 103.367681ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:21:48.505643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.263667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:21:48.505688Z","caller":"traceutil/trace.go:171","msg":"trace[347712081] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:858; }","duration":"102.323305ms","start":"2024-09-23T10:21:48.403354Z","end":"2024-09-23T10:21:48.505678Z","steps":["trace[347712081] 'agreement among raft nodes before linearized reading'  (duration: 102.229014ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:31:30.029497Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1691}
	{"level":"info","ts":"2024-09-23T10:31:30.053635Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1691,"took":"23.576898ms","hash":1617186798,"current-db-size-bytes":8450048,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":4345856,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-23T10:31:30.053692Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1617186798,"revision":1691,"compact-revision":-1}
	
	
	==> gcp-auth [c300d56f5b85] <==
	2024/09/23 10:23:01 GCP Auth Webhook started!
	2024/09/23 10:23:18 Ready to marshal response ...
	2024/09/23 10:23:18 Ready to write response ...
	2024/09/23 10:23:19 Ready to marshal response ...
	2024/09/23 10:23:19 Ready to write response ...
	2024/09/23 10:23:41 Ready to marshal response ...
	2024/09/23 10:23:41 Ready to write response ...
	2024/09/23 10:23:41 Ready to marshal response ...
	2024/09/23 10:23:41 Ready to write response ...
	2024/09/23 10:23:41 Ready to marshal response ...
	2024/09/23 10:23:41 Ready to write response ...
	2024/09/23 10:31:54 Ready to marshal response ...
	2024/09/23 10:31:54 Ready to write response ...
	
	
	==> kernel <==
	 10:32:55 up 15 min,  0 users,  load average: 0.67, 0.52, 0.45
	Linux ubuntu-20-agent-14 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [e4810d0b22eb] <==
	W0923 10:22:18.838865       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.192.7:443: connect: connection refused
	W0923 10:22:19.940734       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.192.7:443: connect: connection refused
	W0923 10:22:27.204977       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.224.56:443: connect: connection refused
	E0923 10:22:27.205013       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.224.56:443: connect: connection refused" logger="UnhandledError"
	W0923 10:22:49.216113       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.224.56:443: connect: connection refused
	E0923 10:22:49.216157       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.224.56:443: connect: connection refused" logger="UnhandledError"
	W0923 10:22:49.223833       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.224.56:443: connect: connection refused
	E0923 10:22:49.223872       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.224.56:443: connect: connection refused" logger="UnhandledError"
	I0923 10:23:19.040942       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0923 10:23:19.058515       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0923 10:23:31.478429       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0923 10:23:31.491944       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0923 10:23:31.594148       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 10:23:31.610423       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0923 10:23:31.679679       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 10:23:31.813688       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 10:23:31.887723       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 10:23:31.914330       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 10:23:32.639302       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0923 10:23:32.670531       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 10:23:32.680769       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 10:23:32.868505       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 10:23:32.887887       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 10:23:32.914817       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 10:23:33.075675       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [7c0ce9c20225] <==
	W0923 10:31:43.167394       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:31:43.167435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:31:46.114036       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:31:46.114083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:31:49.827875       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:31:49.827916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:31:50.460903       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:31:50.460943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:03.556165       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:03.556207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:07.178480       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:07.178536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:24.080503       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:24.080550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:24.343264       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:24.343314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:35.718308       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:35.718362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:40.274545       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:40.274611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:41.055093       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:41.055142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:32:46.510382       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:32:46.510421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:32:54.523327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="14.962µs"
	
	
	==> kube-proxy [d4b4134082f2] <==
	I0923 10:21:39.802906       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:21:39.972051       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.150.0.16"]
	E0923 10:21:39.972326       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:21:40.101058       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:21:40.101112       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:21:40.114341       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:21:40.114712       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:21:40.114740       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:21:40.116329       1 config.go:199] "Starting service config controller"
	I0923 10:21:40.116359       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:21:40.116407       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:21:40.116414       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:21:40.116407       1 config.go:328] "Starting node config controller"
	I0923 10:21:40.118507       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:21:40.216805       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:21:40.216885       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:21:40.219023       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d978992a060a] <==
	W0923 10:21:30.961743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:21:30.961782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:30.961119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:21:30.961819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:30.961819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:30.961856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:31.800175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:31.800218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:31.877035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:21:31.877077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:31.891653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:21:31.891696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:31.951064       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:21:31.951104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:31.968057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:21:31.968098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:31.998608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:31.998650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.085021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:21:32.085071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.116576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:21:32.116623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:21:32.138327       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:21:32.138371       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 10:21:34.657508       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Mon 2024-08-26 20:51:04 UTC, end at Mon 2024-09-23 10:32:55 UTC. --
	Sep 23 10:32:42 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:42.761287   15946 scope.go:117] "RemoveContainer" containerID="19bc9bdaffa6ca1785506c4e9a9ebb2a8ba015cf365fda6d059b5d3a6aec0814"
	Sep 23 10:32:42 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:42.761466   15946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-w2hzg_gadget(77a823d2-c0ec-4f9d-b418-c9bac6c68b52)\"" pod="gadget/gadget-w2hzg" podUID="77a823d2-c0ec-4f9d-b418-c9bac6c68b52"
	Sep 23 10:32:48 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:48.763823   15946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ef59238e-5f40-4a5e-aae7-54d679f8081f"
	Sep 23 10:32:50 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:50.763330   15946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="5921de05-3259-4ce4-9d6c-d4a86d7540fa"
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.385008   15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27x4z\" (UniqueName: \"kubernetes.io/projected/5921de05-3259-4ce4-9d6c-d4a86d7540fa-kube-api-access-27x4z\") pod \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\" (UID: \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\") "
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.385071   15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5921de05-3259-4ce4-9d6c-d4a86d7540fa-gcp-creds\") pod \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\" (UID: \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\") "
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.385183   15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5921de05-3259-4ce4-9d6c-d4a86d7540fa-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5921de05-3259-4ce4-9d6c-d4a86d7540fa" (UID: "5921de05-3259-4ce4-9d6c-d4a86d7540fa"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.387044   15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5921de05-3259-4ce4-9d6c-d4a86d7540fa-kube-api-access-27x4z" (OuterVolumeSpecName: "kube-api-access-27x4z") pod "5921de05-3259-4ce4-9d6c-d4a86d7540fa" (UID: "5921de05-3259-4ce4-9d6c-d4a86d7540fa"). InnerVolumeSpecName "kube-api-access-27x4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.485626   15946 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5921de05-3259-4ce4-9d6c-d4a86d7540fa-gcp-creds\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.485670   15946 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-27x4z\" (UniqueName: \"kubernetes.io/projected/5921de05-3259-4ce4-9d6c-d4a86d7540fa-kube-api-access-27x4z\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.761163   15946 scope.go:117] "RemoveContainer" containerID="19bc9bdaffa6ca1785506c4e9a9ebb2a8ba015cf365fda6d059b5d3a6aec0814"
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.888525   15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj4jr\" (UniqueName: \"kubernetes.io/projected/678aa223-edb6-4a6c-b3e5-5d95e0ea40f6-kube-api-access-mj4jr\") pod \"678aa223-edb6-4a6c-b3e5-5d95e0ea40f6\" (UID: \"678aa223-edb6-4a6c-b3e5-5d95e0ea40f6\") "
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.894587   15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678aa223-edb6-4a6c-b3e5-5d95e0ea40f6-kube-api-access-mj4jr" (OuterVolumeSpecName: "kube-api-access-mj4jr") pod "678aa223-edb6-4a6c-b3e5-5d95e0ea40f6" (UID: "678aa223-edb6-4a6c-b3e5-5d95e0ea40f6"). InnerVolumeSpecName "kube-api-access-mj4jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.989703   15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtpmn\" (UniqueName: \"kubernetes.io/projected/35894a53-f7e8-4743-9eea-200f3986fcd6-kube-api-access-wtpmn\") pod \"35894a53-f7e8-4743-9eea-200f3986fcd6\" (UID: \"35894a53-f7e8-4743-9eea-200f3986fcd6\") "
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.989816   15946 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mj4jr\" (UniqueName: \"kubernetes.io/projected/678aa223-edb6-4a6c-b3e5-5d95e0ea40f6-kube-api-access-mj4jr\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
	Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.992148   15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35894a53-f7e8-4743-9eea-200f3986fcd6-kube-api-access-wtpmn" (OuterVolumeSpecName: "kube-api-access-wtpmn") pod "35894a53-f7e8-4743-9eea-200f3986fcd6" (UID: "35894a53-f7e8-4743-9eea-200f3986fcd6"). InnerVolumeSpecName "kube-api-access-wtpmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.090290   15946 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wtpmn\" (UniqueName: \"kubernetes.io/projected/35894a53-f7e8-4743-9eea-200f3986fcd6-kube-api-access-wtpmn\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.359324   15946 scope.go:117] "RemoveContainer" containerID="c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.378503   15946 scope.go:117] "RemoveContainer" containerID="c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:55.379325   15946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef" containerID="c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.379365   15946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"} err="failed to get container status \"c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef\": rpc error: code = Unknown desc = Error response from daemon: No such container: c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.379391   15946 scope.go:117] "RemoveContainer" containerID="904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.399148   15946 scope.go:117] "RemoveContainer" containerID="904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:55.400117   15946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90" containerID="904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
	Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.400188   15946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"} err="failed to get container status \"904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90\": rpc error: code = Unknown desc = Error response from daemon: No such container: 904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
	
	
	==> storage-provisioner [29d253e8d623] <==
	I0923 10:21:40.339278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:21:40.349639       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:21:40.349676       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:21:40.360782       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:21:40.360971       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-14_a2863585-6319-4866-8b5f-dec1261c04ee!
	I0923 10:21:40.362124       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8e306d8-c1db-43d5-8589-ceef2d9d7cac", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-14_a2863585-6319-4866-8b5f-dec1261c04ee became leader
	I0923 10:21:40.461157       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-14_a2863585-6319-4866-8b5f-dec1261c04ee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-14/10.150.0.16
	Start Time:       Mon, 23 Sep 2024 10:23:41 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrdnh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vrdnh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m15s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-14
	  Normal   Pulling    7m49s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m49s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m49s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m36s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.01s)

                                                
                                    

Test pass (104/166)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.51
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 0.71
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 41.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 101.99
29 TestAddons/serial/Volcano 38.59
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.48
36 TestAddons/parallel/MetricsServer 5.4
38 TestAddons/parallel/CSI 40.35
39 TestAddons/parallel/Headlamp 14.91
40 TestAddons/parallel/CloudSpanner 5.27
42 TestAddons/parallel/NvidiaDevicePlugin 6.24
43 TestAddons/parallel/Yakd 11.45
44 TestAddons/StoppedEnableDisable 10.71
46 TestCertExpiration 228.83
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 31.2
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 25.58
61 TestFunctional/serial/KubeContext 0.05
62 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/MinikubeKubectlCmd 0.11
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 37.64
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.87
69 TestFunctional/serial/LogsFileCmd 0.92
70 TestFunctional/serial/InvalidService 4.75
72 TestFunctional/parallel/ConfigCmd 0.27
73 TestFunctional/parallel/DashboardCmd 8.42
74 TestFunctional/parallel/DryRun 0.17
75 TestFunctional/parallel/InternationalLanguage 0.09
76 TestFunctional/parallel/StatusCmd 0.43
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.22
80 TestFunctional/parallel/ProfileCmd/profile_list 0.2
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
83 TestFunctional/parallel/ServiceCmd/DeployApp 8.15
84 TestFunctional/parallel/ServiceCmd/List 0.34
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
87 TestFunctional/parallel/ServiceCmd/Format 0.16
88 TestFunctional/parallel/ServiceCmd/URL 0.16
89 TestFunctional/parallel/ServiceCmdConnect 7.36
90 TestFunctional/parallel/AddonsCmd 0.12
91 TestFunctional/parallel/PersistentVolumeClaim 20.85
104 TestFunctional/parallel/MySQL 20.76
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.55
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.73
113 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/Version/short 0.04
118 TestFunctional/parallel/Version/components 0.39
119 TestFunctional/parallel/License 0.13
120 TestFunctional/delete_echo-server_images 0.03
121 TestFunctional/delete_my-image_image 0.02
122 TestFunctional/delete_minikube_cached_images 0.02
127 TestImageBuild/serial/Setup 15.07
128 TestImageBuild/serial/NormalBuild 1.13
129 TestImageBuild/serial/BuildWithBuildArg 0.71
130 TestImageBuild/serial/BuildWithDockerIgnore 0.49
131 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.46
135 TestJSONOutput/start/Command 26.65
136 TestJSONOutput/start/Audit 0
138 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/pause/Command 0.53
142 TestJSONOutput/pause/Audit 0
144 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/unpause/Command 0.42
148 TestJSONOutput/unpause/Audit 0
150 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/stop/Command 10.43
154 TestJSONOutput/stop/Audit 0
156 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
158 TestErrorJSONOutput 0.2
163 TestMainNoArgs 0.05
164 TestMinikubeProfile 34.69
172 TestPause/serial/Start 27.91
173 TestPause/serial/SecondStartNoReconfiguration 34.66
174 TestPause/serial/Pause 0.52
175 TestPause/serial/VerifyStatus 0.14
176 TestPause/serial/Unpause 0.41
177 TestPause/serial/PauseAgain 0.55
178 TestPause/serial/DeletePaused 1.75
179 TestPause/serial/VerifyDeletedResources 0.07
193 TestRunningBinaryUpgrade 66.41
195 TestStoppedBinaryUpgrade/Setup 0.46
196 TestStoppedBinaryUpgrade/Upgrade 51.02
197 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
198 TestKubernetesUpgrade 315.22
TestDownloadOnly/v1.20.0/json-events (8.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (8.509911928s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.51s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (62.089309ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:28
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:28.656828   10465 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:28.656924   10465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:28.656929   10465 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:28.656940   10465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:28.657109   10465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3689/.minikube/bin
	W0923 10:20:28.657224   10465 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19689-3689/.minikube/config/config.json: open /home/jenkins/minikube-integration/19689-3689/.minikube/config/config.json: no such file or directory
	I0923 10:20:28.657800   10465 out.go:352] Setting JSON to true
	I0923 10:20:28.658713   10465 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":176,"bootTime":1727086653,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:20:28.658821   10465 start.go:139] virtualization: kvm guest
	I0923 10:20:28.661812   10465 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:20:28.661973   10465 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3689/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:20:28.662005   10465 notify.go:220] Checking for updates...
	I0923 10:20:28.663713   10465 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:20:28.665339   10465 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:28.666986   10465 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	I0923 10:20:28.668614   10465 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	I0923 10:20:28.669889   10465 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 10:20:28.672358   10465 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:20:28.672596   10465 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:28.686484   10465 out.go:97] Using the none driver based on user configuration
	I0923 10:20:28.686514   10465 start.go:297] selected driver: none
	I0923 10:20:28.686521   10465 start.go:901] validating driver "none" against <nil>
	I0923 10:20:28.686545   10465 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	I0923 10:20:28.686915   10465 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:28.687405   10465 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 10:20:28.687552   10465 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:20:28.687580   10465 cni.go:84] Creating CNI manager for ""
	I0923 10:20:28.687631   10465 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 10:20:28.687670   10465 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:20:28.689282   10465 out.go:97] Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 10:20:28.689633   10465 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json ...
	I0923 10:20:28.689665   10465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json: {Name:mk91c6775a53b295bfcd832a0223bb0435d503a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:20:28.689850   10465 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:28.690248   10465 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.20.0/kubelet
	I0923 10:20:28.690254   10465 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.20.0/kubectl
	I0923 10:20:28.690425   10465 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.20.0/kubeadm
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (0.71s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.71s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.000322ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:37
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:37.482773   10617 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:20:37.483040   10617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:37.483050   10617 out.go:358] Setting ErrFile to fd 2...
	I0923 10:20:37.483054   10617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:37.483263   10617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3689/.minikube/bin
	I0923 10:20:37.483883   10617 out.go:352] Setting JSON to true
	I0923 10:20:37.484766   10617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":184,"bootTime":1727086653,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:20:37.484855   10617 start.go:139] virtualization: kvm guest
	I0923 10:20:37.486976   10617 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:20:37.487090   10617 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3689/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:20:37.487131   10617 notify.go:220] Checking for updates...
	I0923 10:20:37.488565   10617 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:20:37.490025   10617 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:20:37.491320   10617 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	I0923 10:20:37.494181   10617 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	I0923 10:20:37.495845   10617 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I0923 10:20:38.706938   10453 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:44303 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.55s)

TestOffline (41.42s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.497775002s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.924245488s)
--- PASS: TestOffline (41.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (48.884826ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (48.118065ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (101.99s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m41.984951019s)
--- PASS: TestAddons/Setup (101.99s)

TestAddons/serial/Volcano (38.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 10.008649ms
addons_test.go:843: volcano-admission stabilized in 10.049868ms
addons_test.go:851: volcano-controller stabilized in 10.076111ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-qzpm6" [24af0972-4fa6-4790-9b14-0f90e94217fc] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003573671s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-jnhzg" [b15aab08-1785-4eb8-bd2f-10aeff7dde7f] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003401737s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-mt7n2" [57b24538-eede-4ad1-917c-d3918f45831b] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003799793s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [68867870-f0a4-497b-be09-c37e7a9f69c5] Pending
helpers_test.go:344: "test-job-nginx-0" [68867870-f0a4-497b-be09-c37e7a9f69c5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [68867870-f0a4-497b-be09-c37e7a9f69c5] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.00339359s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.239030478s)
--- PASS: TestAddons/serial/Volcano (38.59s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.48s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-w2hzg" [77a823d2-c0ec-4f9d-b418-c9bac6c68b52] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003808878s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.477711891s)
--- PASS: TestAddons/parallel/InspektorGadget (10.48s)

TestAddons/parallel/MetricsServer (5.4s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.028028ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-nnrdh" [f57d2252-a248-4969-9111-da3afb4eebd3] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004114573s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.40s)

TestAddons/parallel/CSI (40.35s)

=== RUN   TestAddons/parallel/CSI
I0923 10:33:11.887472   10453 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:33:11.891232   10453 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:33:11.891257   10453 kapi.go:107] duration metric: took 3.808518ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.818043ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [913cdc90-c5cb-48e6-978f-7730a7b574f9] Pending
helpers_test.go:344: "task-pv-pod" [913cdc90-c5cb-48e6-978f-7730a7b574f9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [913cdc90-c5cb-48e6-978f-7730a7b574f9] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003423892s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [55c6fb1c-d310-498d-8c42-a0332624616c] Pending
helpers_test.go:344: "task-pv-pod-restore" [55c6fb1c-d310-498d-8c42-a0332624616c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [55c6fb1c-d310-498d-8c42-a0332624616c] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003480573s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.318204861s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.35s)

TestAddons/parallel/Headlamp (14.91s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-c8f9q" [8d01b9b0-a4ff-4d00-a3b4-d25ce2e50d1a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-c8f9q" [8d01b9b0-a4ff-4d00-a3b4-d25ce2e50d1a] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004710142s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.418359881s)
--- PASS: TestAddons/parallel/Headlamp (14.91s)

TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-psstj" [41189895-16c7-4bdd-aa6d-8f33b27a4725] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003745555s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.27s)

TestAddons/parallel/NvidiaDevicePlugin (6.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t8s2p" [7c3d0947-5713-4d85-a7a0-09660e93cfcd] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003193633s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.24s)

TestAddons/parallel/Yakd (11.45s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-kf48r" [68c7c4d5-8b19-4747-a17e-502675696e60] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003625702s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.441163397s)
--- PASS: TestAddons/parallel/Yakd (11.45s)

TestAddons/StoppedEnableDisable (10.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.399697569s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.71s)

TestCertExpiration (228.83s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.845993392s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.224725619s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.75200913s)
--- PASS: TestCertExpiration (228.83s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19689-3689/.minikube/files/etc/test/nested/copy/10453/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (31.2s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (31.202954909s)
--- PASS: TestFunctional/serial/StartWithProxy (31.20s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.58s)

=== RUN   TestFunctional/serial/SoftStart
I0923 10:39:02.154026   10453 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (25.581272564s)
functional_test.go:663: soft start took 25.581888173s for "minikube" cluster.
I0923 10:39:27.735644   10453 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (25.58s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.637636403s)
functional_test.go:761: restart took 37.637774457s for "minikube" cluster.
I0923 10:40:05.701341   10453 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.64s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

TestFunctional/serial/LogsFileCmd (0.92s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1536137592/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.92s)

TestFunctional/serial/InvalidService (4.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (166.842647ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.150.0.16:31013 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.393017409s)
--- PASS: TestFunctional/serial/InvalidService (4.75s)

TestFunctional/parallel/ConfigCmd (0.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.549589ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.844555ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)

TestFunctional/parallel/DashboardCmd (8.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/23 10:40:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45930: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.42s)

TestFunctional/parallel/DryRun (0.17s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (83.880582ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 10:40:21.043420   46301 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:40:21.043577   46301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:21.043588   46301 out.go:358] Setting ErrFile to fd 2...
	I0923 10:40:21.043595   46301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:21.043776   46301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3689/.minikube/bin
	I0923 10:40:21.044339   46301 out.go:352] Setting JSON to false
	I0923 10:40:21.045285   46301 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1368,"bootTime":1727086653,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:40:21.045390   46301 start.go:139] virtualization: kvm guest
	I0923 10:40:21.047940   46301 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:40:21.049358   46301 notify.go:220] Checking for updates...
	W0923 10:40:21.049334   46301 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3689/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:40:21.049370   46301 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:40:21.050782   46301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:40:21.052183   46301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	I0923 10:40:21.053731   46301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	I0923 10:40:21.055082   46301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:40:21.056364   46301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:40:21.058402   46301 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:40:21.058679   46301 exec_runner.go:51] Run: systemctl --version
	I0923 10:40:21.061652   46301 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:40:21.076526   46301 out.go:177] * Using the none driver based on existing profile
	I0923 10:40:21.077931   46301 start.go:297] selected driver: none
	I0923 10:40:21.077952   46301 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.150.0.16 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:40:21.078058   46301 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:40:21.078082   46301 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 10:40:21.078422   46301 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0923 10:40:21.080992   46301 out.go:201] 
	W0923 10:40:21.082349   46301 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 10:40:21.083761   46301 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)

TestFunctional/parallel/InternationalLanguage (0.09s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (87.090299ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 10:40:21.213156   46332 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:40:21.213320   46332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:21.213332   46332 out.go:358] Setting ErrFile to fd 2...
	I0923 10:40:21.213338   46332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:40:21.213615   46332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3689/.minikube/bin
	I0923 10:40:21.214219   46332 out.go:352] Setting JSON to false
	I0923 10:40:21.215236   46332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1368,"bootTime":1727086653,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:40:21.215372   46332 start.go:139] virtualization: kvm guest
	I0923 10:40:21.217659   46332 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0923 10:40:21.219158   46332 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3689/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:40:21.219199   46332 notify.go:220] Checking for updates...
	I0923 10:40:21.219227   46332 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:40:21.221040   46332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:40:21.222840   46332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	I0923 10:40:21.224523   46332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	I0923 10:40:21.226262   46332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:40:21.228366   46332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:40:21.230553   46332 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:40:21.230893   46332 exec_runner.go:51] Run: systemctl --version
	I0923 10:40:21.233689   46332 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:40:21.246850   46332 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0923 10:40:21.248379   46332 start.go:297] selected driver: none
	I0923 10:40:21.248402   46332 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.150.0.16 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:40:21.248550   46332 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:40:21.248594   46332 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 10:40:21.248960   46332 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0923 10:40:21.251383   46332 out.go:201] 
	W0923 10:40:21.252683   46332 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 10:40:21.254178   46332 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)

TestFunctional/parallel/StatusCmd (0.43s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.22s)

TestFunctional/parallel/ProfileCmd/profile_list (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "158.271532ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.13142ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.20s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "160.692809ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.402795ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-28lz2" [e663872b-717a-48ef-9f63-837e78f86ec3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-28lz2" [e663872b-717a-48ef-9f63-837e78f86ec3] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003017066s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "341.973054ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.150.0.16:32580
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.16s)

TestFunctional/parallel/ServiceCmd/URL (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.150.0.16:32580
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.16s)

TestFunctional/parallel/ServiceCmdConnect (7.36s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6jxhn" [58ec39c6-3084-4b1d-8f5c-4a5fe47f7792] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6jxhn" [58ec39c6-3084-4b1d-8f5c-4a5fe47f7792] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004270034s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.150.0.16:30657
functional_test.go:1675: http://10.150.0.16:30657: success! body:

Hostname: hello-node-connect-67bdd5bbb4-6jxhn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.150.0.16:8080/

Request Headers:
	accept-encoding=gzip
	host=10.150.0.16:30657
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.36s)

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (20.85s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d50442e3-70fd-4df3-a5d2-521ef868e79b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003549791s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a24af9a9-eb92-4704-915f-f74b7364d3d2] Pending
helpers_test.go:344: "sp-pod" [a24af9a9-eb92-4704-915f-f74b7364d3d2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a24af9a9-eb92-4704-915f-f74b7364d3d2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004003229s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.100413292s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [65db8e58-ccdf-4473-a31e-c153428af202] Pending
helpers_test.go:344: "sp-pod" [65db8e58-ccdf-4473-a31e-c153428af202] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003713764s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.85s)

TestFunctional/parallel/MySQL (20.76s)
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-h26mv" [e31ed1ff-15ce-41bc-a1de-9c232f5c4415] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-h26mv" [e31ed1ff-15ce-41bc-a1de-9c232f5c4415] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003641516s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-h26mv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-h26mv -- mysql -ppassword -e "show databases;": exit status 1 (115.897009ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:41:17.308040   10453 retry.go:31] will retry after 1.290805168s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-h26mv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-h26mv -- mysql -ppassword -e "show databases;": exit status 1 (115.222721ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:41:18.714827   10453 retry.go:31] will retry after 1.95964958s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-h26mv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.76s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.55s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.553988384s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.55s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.73s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.72961973s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.73s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.39s)
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

TestFunctional/parallel/License (0.13s)
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.13s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (15.07s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (15.074116089s)
--- PASS: TestImageBuild/serial/Setup (15.07s)

TestImageBuild/serial/NormalBuild (1.13s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.127104566s)
--- PASS: TestImageBuild/serial/NormalBuild (1.13s)

TestImageBuild/serial/BuildWithBuildArg (0.71s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.71s)

TestImageBuild/serial/BuildWithDockerIgnore (0.49s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)

TestJSONOutput/start/Command (26.65s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (26.6445191s)
--- PASS: TestJSONOutput/start/Command (26.65s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.53s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.42s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.43s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.426625021s)
--- PASS: TestJSONOutput/stop/Command (10.43s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.43478ms)

-- stdout --
	{"specversion":"1.0","id":"e106b458-2c85-441e-9cf8-38a456055489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f043b6e9-2fb0-4bc1-82c8-034f89d593e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"90f336ec-eb0e-46aa-8e9a-669f2ccd8bf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1642e15d-fbec-42e7-876e-f134f0577c11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig"}}
	{"specversion":"1.0","id":"dc2e7c85-2b86-42a0-8f02-3c442d793366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube"}}
	{"specversion":"1.0","id":"fe6fea48-6b2f-443a-9d4d-acdf570e8b3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"14735bc1-f25f-4df8-8e34-d43d1f975dce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dda170b8-74ff-468c-b69a-d32e33dc4e83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (34.69s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.963081208s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.736421988s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.38131017s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.69s)

TestPause/serial/Start (27.91s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (27.911155187s)
--- PASS: TestPause/serial/Start (27.91s)

TestPause/serial/SecondStartNoReconfiguration (34.66s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.660836228s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.66s)

TestPause/serial/Pause (0.52s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.52s)

TestPause/serial/VerifyStatus (0.14s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (136.064302ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)

                                                
                                    
TestPause/serial/Unpause (0.41s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

                                                
                                    
TestPause/serial/PauseAgain (0.55s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.55s)

                                                
                                    
TestPause/serial/DeletePaused (1.75s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.750588057s)
--- PASS: TestPause/serial/DeletePaused (1.75s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                    
TestRunningBinaryUpgrade (66.41s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1294778768 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1294778768 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (27.209159692s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.456930946s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.22668802s)
--- PASS: TestRunningBinaryUpgrade (66.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (51.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1597310016 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1597310016 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.398991327s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1597310016 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1597310016 -p minikube stop: (23.7537298s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.86323682s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.02s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
TestKubernetesUpgrade (315.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.736293029s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.385842648s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (81.387867ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m17.160010138s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (73.815177ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.327218803s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.390799622s)
--- PASS: TestKubernetesUpgrade (315.22s)

                                                
                                    

Test skip (61/166)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
97 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
98 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
102 TestFunctional/parallel/SSHCmd 0
103 TestFunctional/parallel/CpCmd 0
105 TestFunctional/parallel/FileSync 0
106 TestFunctional/parallel/CertSync 0
111 TestFunctional/parallel/DockerEnv 0
112 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/ImageCommands 0
115 TestFunctional/parallel/NonActiveRuntimeDisabled 0
123 TestGvisorAddon 0
124 TestMultiControlPlane 0
132 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
159 TestKicCustomNetwork 0
160 TestKicExistingNetwork 0
161 TestKicCustomSubnet 0
162 TestKicStaticIP 0
165 TestMountStart 0
166 TestMultiNode 0
167 TestNetworkPlugins 0
168 TestNoKubernetes 0
169 TestChangeNoneUser 0
180 TestPreload 0
181 TestScheduledStopWindows 0
182 TestScheduledStopUnix 0
183 TestSkaffold 0
186 TestStartStop/group/old-k8s-version 0.14
187 TestStartStop/group/newest-cni 0.14
188 TestStartStop/group/default-k8s-diff-port 0.14
189 TestStartStop/group/no-preload 0.14
190 TestStartStop/group/disable-driver-mounts 0.14
191 TestStartStop/group/embed-certs 0.14
192 TestInsufficientStorage 0
199 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)

                                                
                                                
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestForceSystemdFlag (0s)

                                                
                                                
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
TestForceSystemdEnv (0s)

                                                
                                                
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestErrorSpam (0s)

                                                
                                                
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

                                                
                                    
TestFunctional/parallel/CpCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

                                                
                                    
TestFunctional/parallel/FileSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

                                                
                                    
TestFunctional/parallel/CertSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestMultiControlPlane (0s)

                                                
                                                
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestMountStart (0s)

                                                
                                                
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

                                                
                                    
TestMultiNode (0s)

                                                
                                                
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

                                                
                                    
TestNetworkPlugins (0s)

                                                
                                                
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

                                                
                                    
TestNoKubernetes (0s)

                                                
                                                
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestPreload (0s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/old-k8s-version (0.14s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)

                                                
                                    
TestStartStop/group/newest-cni (0.14s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)

                                                
                                    
TestStartStop/group/no-preload (0.14s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.14s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestStartStop/group/embed-certs (0.14s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
