Test Report: none_Linux 19681

58481425fd156c33d9cb9581f1bb301aacf19547:2024-09-25:36370

Failed tests (1/167)

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  71.78s
TestAddons/parallel/Registry (71.78s)
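The failure below reduces to a probe that hung until a deadline killed it: the `wget` inside the throwaway pod never got an answer from the registry Service, and the harness keyed on the command's nonzero exit status. A minimal, self-contained sketch of that exit-status mechanism (assuming GNU coreutils `timeout`; `probe` is a hypothetical stand-in for the hanging in-cluster `wget`):

```shell
# `timeout` kills a command that outlives its deadline and exits 124;
# a wrapper that inspects the status can tell "timed out" from "reachable".
probe() {
  # hypothetical stand-in for:
  #   wget --spider -S http://registry.kube-system.svc.cluster.local
  timeout 1 sleep 10
}

if probe; then
  echo "registry reachable"
else
  echo "probe timed out (exit $?)"   # → probe timed out (exit 124)
fi
```

In the real test the deadline lives in kubectl and the harness rather than in `timeout`, which is why the log shows a bare `exit status 1` after roughly a minute, together with the misleading `pod "registry-test" deleted` stdout produced by `--rm`.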

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.75892ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-hjbk7" [a9c82301-9560-4dd9-a31e-55bc04efd0e3] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00369895s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wxxhj" [532ef9f6-818b-4628-a77d-5cb0d7ae89b4] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004493227s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.083023569s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/25 18:41:50 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:39913               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:30 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC | 25 Sep 24 18:30 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC | 25 Sep 24 18:31 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 25 Sep 24 18:32 UTC | 25 Sep 24 18:32 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:41 UTC | 25 Sep 24 18:41 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 25 Sep 24 18:41 UTC | 25 Sep 24 18:41 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 18:30:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 18:30:15.413570   16451 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:30:15.413674   16451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:30:15.413684   16451 out.go:358] Setting ErrFile to fd 2...
	I0925 18:30:15.413689   16451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:30:15.413871   16451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-5898/.minikube/bin
	I0925 18:30:15.414536   16451 out.go:352] Setting JSON to false
	I0925 18:30:15.415378   16451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":757,"bootTime":1727288258,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:30:15.415466   16451 start.go:139] virtualization: kvm guest
	I0925 18:30:15.417954   16451 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0925 18:30:15.419367   16451 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19681-5898/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 18:30:15.419393   16451 notify.go:220] Checking for updates...
	I0925 18:30:15.419420   16451 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 18:30:15.420848   16451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:30:15.422175   16451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	I0925 18:30:15.423560   16451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	I0925 18:30:15.424974   16451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 18:30:15.426266   16451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 18:30:15.427980   16451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 18:30:15.439198   16451 out.go:177] * Using the none driver based on user configuration
	I0925 18:30:15.440425   16451 start.go:297] selected driver: none
	I0925 18:30:15.440436   16451 start.go:901] validating driver "none" against <nil>
	I0925 18:30:15.440447   16451 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 18:30:15.440503   16451 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0925 18:30:15.440824   16451 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0925 18:30:15.441379   16451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0925 18:30:15.441615   16451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 18:30:15.441673   16451 cni.go:84] Creating CNI manager for ""
	I0925 18:30:15.441717   16451 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 18:30:15.441728   16451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 18:30:15.441782   16451 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:30:15.443352   16451 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0925 18:30:15.444923   16451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/config.json ...
	I0925 18:30:15.444954   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/config.json: {Name:mkebbdb915de43cc1f93c7d4941d4e2f5bdebd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:15.445067   16451 start.go:360] acquireMachinesLock for minikube: {Name:mk4b40feaa7a9ad6bd04907d48c7a40c739bd823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 18:30:15.445098   16451 start.go:364] duration metric: took 16.826µs to acquireMachinesLock for "minikube"
	I0925 18:30:15.445111   16451 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 18:30:15.445170   16451 start.go:125] createHost starting for "" (driver="none")
	I0925 18:30:15.447357   16451 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0925 18:30:15.448638   16451 exec_runner.go:51] Run: systemctl --version
	I0925 18:30:15.451316   16451 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0925 18:30:15.451351   16451 client.go:168] LocalClient.Create starting
	I0925 18:30:15.451401   16451 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19681-5898/.minikube/certs/ca.pem
	I0925 18:30:15.451428   16451 main.go:141] libmachine: Decoding PEM data...
	I0925 18:30:15.451441   16451 main.go:141] libmachine: Parsing certificate...
	I0925 18:30:15.451493   16451 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19681-5898/.minikube/certs/cert.pem
	I0925 18:30:15.451513   16451 main.go:141] libmachine: Decoding PEM data...
	I0925 18:30:15.451521   16451 main.go:141] libmachine: Parsing certificate...
	I0925 18:30:15.451810   16451 client.go:171] duration metric: took 452.449µs to LocalClient.Create
	I0925 18:30:15.451831   16451 start.go:167] duration metric: took 517.6µs to libmachine.API.Create "minikube"
	I0925 18:30:15.451837   16451 start.go:293] postStartSetup for "minikube" (driver="none")
	I0925 18:30:15.451884   16451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 18:30:15.451913   16451 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 18:30:15.461920   16451 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0925 18:30:15.461973   16451 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0925 18:30:15.461997   16451 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0925 18:30:15.464163   16451 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0925 18:30:15.466102   16451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19681-5898/.minikube/addons for local assets ...
	I0925 18:30:15.466161   16451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19681-5898/.minikube/files for local assets ...
	I0925 18:30:15.466182   16451 start.go:296] duration metric: took 14.340253ms for postStartSetup
	I0925 18:30:15.466740   16451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/config.json ...
	I0925 18:30:15.466866   16451 start.go:128] duration metric: took 21.688387ms to createHost
	I0925 18:30:15.466883   16451 start.go:83] releasing machines lock for "minikube", held for 21.776148ms
	I0925 18:30:15.467176   16451 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0925 18:30:15.467315   16451 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0925 18:30:15.469203   16451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 18:30:15.469244   16451 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 18:30:15.478926   16451 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0925 18:30:15.478957   16451 start.go:495] detecting cgroup driver to use...
	I0925 18:30:15.478987   16451 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0925 18:30:15.479075   16451 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 18:30:15.499748   16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0925 18:30:15.509821   16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 18:30:15.518843   16451 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 18:30:15.518908   16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 18:30:15.529118   16451 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 18:30:15.538348   16451 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 18:30:15.548024   16451 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 18:30:15.557720   16451 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 18:30:15.566261   16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 18:30:15.575282   16451 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0925 18:30:15.584608   16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0925 18:30:15.594289   16451 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 18:30:15.602640   16451 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 18:30:15.611672   16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0925 18:30:15.828656   16451 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0925 18:30:15.894820   16451 start.go:495] detecting cgroup driver to use...
	I0925 18:30:15.894869   16451 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0925 18:30:15.894986   16451 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 18:30:15.920077   16451 exec_runner.go:51] Run: which cri-dockerd
	I0925 18:30:15.921108   16451 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 18:30:15.929166   16451 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0925 18:30:15.929187   16451 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0925 18:30:15.929229   16451 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0925 18:30:15.936584   16451 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0925 18:30:15.936737   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2070867925 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0925 18:30:15.944632   16451 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0925 18:30:16.156748   16451 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0925 18:30:16.380931   16451 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 18:30:16.381067   16451 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0925 18:30:16.381084   16451 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0925 18:30:16.381130   16451 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0925 18:30:16.390141   16451 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0925 18:30:16.390278   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4080595848 /etc/docker/daemon.json
	I0925 18:30:16.398604   16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0925 18:30:16.602677   16451 exec_runner.go:51] Run: sudo systemctl restart docker
	I0925 18:30:16.918425   16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0925 18:30:16.929664   16451 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0925 18:30:16.945369   16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 18:30:16.955930   16451 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0925 18:30:17.174981   16451 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0925 18:30:17.379196   16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0925 18:30:17.588355   16451 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0925 18:30:17.602383   16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0925 18:30:17.612956   16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0925 18:30:17.817627   16451 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0925 18:30:17.884897   16451 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 18:30:17.884963   16451 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0925 18:30:17.886290   16451 start.go:563] Will wait 60s for crictl version
	I0925 18:30:17.886332   16451 exec_runner.go:51] Run: which crictl
	I0925 18:30:17.887320   16451 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0925 18:30:17.915372   16451 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0925 18:30:17.915435   16451 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0925 18:30:17.935882   16451 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0925 18:30:17.960939   16451 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0925 18:30:17.961010   16451 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0925 18:30:17.963980   16451 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0925 18:30:17.965377   16451 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0925 18:30:17.965489   16451 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0925 18:30:17.965502   16451 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0925 18:30:17.965576   16451 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0925 18:30:17.965615   16451 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0925 18:30:18.013942   16451 cni.go:84] Creating CNI manager for ""
	I0925 18:30:18.013972   16451 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 18:30:18.013986   16451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0925 18:30:18.014014   16451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 18:30:18.014166   16451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
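	The generated kubeadm config above hard-codes three network ranges (pod CIDR 10.244.0.0/16, service CIDR 10.96.0.0/12, node IP 10.138.0.48) that must be internally consistent for kubeadm to accept them. A minimal Go sketch of that kind of pre-flight check; the helpers `validCIDR` and `contains` are illustrative, not minikube's actual API:

```go
package main

import (
	"fmt"
	"net"
)

// validCIDR reports whether s parses as a CIDR block, e.g. the
// podSubnet and serviceSubnet fields in the kubeadm config above.
func validCIDR(s string) bool {
	_, _, err := net.ParseCIDR(s)
	return err == nil
}

// contains reports whether ip falls inside the CIDR block.
func contains(cidr, ip string) bool {
	_, ipNet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	return ipNet.Contains(net.ParseIP(ip))
}

func main() {
	// Values taken from the generated config above.
	fmt.Println(validCIDR("10.244.0.0/16")) // pod subnet
	fmt.Println(validCIDR("10.96.0.0/12"))  // service subnet
	// The first service IP (10.96.0.1, a SAN on the apiserver cert
	// later in this log) must live inside the service CIDR.
	fmt.Println(contains("10.96.0.0/12", "10.96.0.1"))
}
```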
	
	I0925 18:30:18.014237   16451 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0925 18:30:18.022441   16451 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0925 18:30:18.022501   16451 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0925 18:30:18.031187   16451 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0925 18:30:18.031197   16451 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0925 18:30:18.031235   16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0925 18:30:18.031254   16451 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0925 18:30:18.031264   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0925 18:30:18.031295   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0925 18:30:18.043139   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0925 18:30:18.086651   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3351688123 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0925 18:30:18.089195   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3367296908 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0925 18:30:18.107665   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1272728177 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0925 18:30:18.178052   16451 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 18:30:18.186373   16451 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0925 18:30:18.186394   16451 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0925 18:30:18.186447   16451 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0925 18:30:18.194496   16451 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0925 18:30:18.194658   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1873495123 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0925 18:30:18.202901   16451 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0925 18:30:18.202926   16451 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0925 18:30:18.202964   16451 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0925 18:30:18.211824   16451 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 18:30:18.211955   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1913781820 /lib/systemd/system/kubelet.service
	I0925 18:30:18.219757   16451 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0925 18:30:18.219874   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3037954303 /var/tmp/minikube/kubeadm.yaml.new
	I0925 18:30:18.227853   16451 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0925 18:30:18.229146   16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0925 18:30:18.443538   16451 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0925 18:30:18.457414   16451 certs.go:68] Setting up /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube for IP: 10.138.0.48
	I0925 18:30:18.457442   16451 certs.go:194] generating shared ca certs ...
	I0925 18:30:18.457464   16451 certs.go:226] acquiring lock for ca certs: {Name:mk797a982dec8749bfea78088159640624c15ee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:18.457631   16451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.key
	I0925 18:30:18.457692   16451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19681-5898/.minikube/proxy-client-ca.key
	I0925 18:30:18.457711   16451 certs.go:256] generating profile certs ...
	I0925 18:30:18.457781   16451 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.key
	I0925 18:30:18.457798   16451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.crt with IP's: []
	I0925 18:30:18.618354   16451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.crt ...
	I0925 18:30:18.618393   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.crt: {Name:mk81d2343b948c26787d0580b1c74f4e482d640f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:18.618526   16451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.key ...
	I0925 18:30:18.618536   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.key: {Name:mk0fb46d259c03bb8651bceba9e8ef242d4f55a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:18.618609   16451 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0925 18:30:18.618623   16451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0925 18:30:18.877054   16451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0925 18:30:18.877084   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk91734d19d868cb0d88642f455059ce2846399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:18.877240   16451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0925 18:30:18.877253   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk352c45359716af5fc35e6c4c13021fc9cc05bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:18.877309   16451 certs.go:381] copying /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt
	I0925 18:30:18.877391   16451 certs.go:385] copying /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key
	I0925 18:30:18.877441   16451 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key
	I0925 18:30:18.877454   16451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0925 18:30:19.047846   16451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt ...
	I0925 18:30:19.047879   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt: {Name:mka7af6981f8a2fac11d40c392a395138807970f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:19.048002   16451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key ...
	I0925 18:30:19.048012   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key: {Name:mk5b6b2aebb127988bd38f1ca6fe09ccbc1125dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:19.048183   16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/ca-key.pem (1679 bytes)
	I0925 18:30:19.048215   16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/ca.pem (1082 bytes)
	I0925 18:30:19.048240   16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/cert.pem (1123 bytes)
	I0925 18:30:19.048264   16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/key.pem (1675 bytes)
	I0925 18:30:19.048891   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 18:30:19.049001   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2420333353 /var/lib/minikube/certs/ca.crt
	I0925 18:30:19.057542   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 18:30:19.057675   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2675950566 /var/lib/minikube/certs/ca.key
	I0925 18:30:19.065862   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 18:30:19.066034   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2423427720 /var/lib/minikube/certs/proxy-client-ca.crt
	I0925 18:30:19.074663   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 18:30:19.074785   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2568076541 /var/lib/minikube/certs/proxy-client-ca.key
	I0925 18:30:19.082531   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0925 18:30:19.082656   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1739644477 /var/lib/minikube/certs/apiserver.crt
	I0925 18:30:19.091467   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 18:30:19.091584   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1535756161 /var/lib/minikube/certs/apiserver.key
	I0925 18:30:19.099358   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 18:30:19.099503   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube877895307 /var/lib/minikube/certs/proxy-client.crt
	I0925 18:30:19.108502   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 18:30:19.108637   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1602453218 /var/lib/minikube/certs/proxy-client.key
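	The certs.go/crypto.go steps above reuse an existing "minikubeCA" and sign the profile certs against it, with validity bounded by the 26280h0m0s CertExpiration from the cluster config. A minimal sketch of generating such a self-signed CA with Go's crypto/x509; `newCA` is a hypothetical helper, not minikube's actual function:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newCA builds a self-signed CA certificate, roughly what the
// cert-generation steps in the log do for "minikubeCA".
func newCA(cn string) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: cn},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	ca, err := newCA("minikubeCA")
	if err != nil {
		panic(err)
	}
	fmt.Println(ca.Subject.CommonName, ca.IsCA)
}
```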
	I0925 18:30:19.116168   16451 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0925 18:30:19.116188   16451 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:19.116226   16451 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:19.123696   16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 18:30:19.123826   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3127878063 /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:19.131447   16451 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 18:30:19.131566   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3162351443 /var/lib/minikube/kubeconfig
	I0925 18:30:19.139327   16451 exec_runner.go:51] Run: openssl version
	I0925 18:30:19.142189   16451 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 18:30:19.150693   16451 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:19.151970   16451 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 25 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:19.152020   16451 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 18:30:19.154861   16451 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 18:30:19.162328   16451 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0925 18:30:19.163381   16451 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0925 18:30:19.163417   16451 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:30:19.163520   16451 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 18:30:19.179284   16451 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 18:30:19.188795   16451 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 18:30:19.198467   16451 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0925 18:30:19.221416   16451 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 18:30:19.230683   16451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 18:30:19.230706   16451 kubeadm.go:157] found existing configuration files:
	
	I0925 18:30:19.230743   16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0925 18:30:19.239208   16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0925 18:30:19.239274   16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0925 18:30:19.248222   16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0925 18:30:19.256534   16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0925 18:30:19.256598   16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0925 18:30:19.263820   16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0925 18:30:19.271588   16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0925 18:30:19.271640   16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0925 18:30:19.278784   16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0925 18:30:19.286504   16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0925 18:30:19.286570   16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
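	The four grep/rm pairs above all apply the same rule: a kubeconfig that does not mention https://control-plane.minikube.internal:8443 is treated as stale and removed before `kubeadm init` regenerates it. A condensed sketch of that rule; the `stale` helper is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// stale mirrors the grep in the log: a kubeconfig that does not
// point at the expected control-plane endpoint should be removed.
func stale(kubeconfig, endpoint string) bool {
	return !strings.Contains(kubeconfig, endpoint)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	fresh := "server: https://control-plane.minikube.internal:8443\n"
	old := "server: https://10.0.0.5:8443\n"
	fmt.Println(stale(fresh, endpoint)) // keep
	fmt.Println(stale(old, endpoint))   // remove
}
```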
	I0925 18:30:19.293876   16451 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0925 18:30:19.328605   16451 kubeadm.go:310] W0925 18:30:19.328438   17334 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0925 18:30:19.329179   16451 kubeadm.go:310] W0925 18:30:19.329105   17334 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0925 18:30:19.330820   16451 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0925 18:30:19.330970   16451 kubeadm.go:310] [preflight] Running pre-flight checks
	I0925 18:30:19.425566   16451 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 18:30:19.425692   16451 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 18:30:19.425704   16451 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 18:30:19.425711   16451 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 18:30:19.436960   16451 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 18:30:19.439785   16451 out.go:235]   - Generating certificates and keys ...
	I0925 18:30:19.439829   16451 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0925 18:30:19.439842   16451 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0925 18:30:19.572806   16451 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 18:30:19.803906   16451 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0925 18:30:20.147115   16451 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0925 18:30:20.231179   16451 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0925 18:30:20.436139   16451 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0925 18:30:20.436276   16451 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0925 18:30:20.543302   16451 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0925 18:30:20.543334   16451 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0925 18:30:20.646090   16451 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 18:30:20.948624   16451 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 18:30:21.035183   16451 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0925 18:30:21.035335   16451 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 18:30:21.117747   16451 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 18:30:21.229183   16451 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0925 18:30:21.426035   16451 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 18:30:21.714766   16451 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 18:30:21.784437   16451 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 18:30:21.784986   16451 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 18:30:21.787213   16451 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 18:30:21.790394   16451 out.go:235]   - Booting up control plane ...
	I0925 18:30:21.790429   16451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 18:30:21.790453   16451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 18:30:21.790463   16451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 18:30:21.812714   16451 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 18:30:21.817285   16451 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 18:30:21.817324   16451 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0925 18:30:22.055497   16451 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0925 18:30:22.055521   16451 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0925 18:30:22.557175   16451 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.674211ms
	I0925 18:30:22.557197   16451 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0925 18:30:27.059201   16451 kubeadm.go:310] [api-check] The API server is healthy after 4.501987322s
	I0925 18:30:27.071740   16451 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 18:30:27.082028   16451 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 18:30:27.098206   16451 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 18:30:27.098226   16451 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 18:30:27.104929   16451 kubeadm.go:310] [bootstrap-token] Using token: 5s7ik4.w0bongazummr9xfg
	I0925 18:30:27.106133   16451 out.go:235]   - Configuring RBAC rules ...
	I0925 18:30:27.106170   16451 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 18:30:27.108983   16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 18:30:27.113924   16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 18:30:27.116157   16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 18:30:27.119331   16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 18:30:27.121471   16451 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 18:30:27.465311   16451 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 18:30:27.888943   16451 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0925 18:30:28.466611   16451 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0925 18:30:28.467510   16451 kubeadm.go:310] 
	I0925 18:30:28.467525   16451 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0925 18:30:28.467529   16451 kubeadm.go:310] 
	I0925 18:30:28.467533   16451 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0925 18:30:28.467537   16451 kubeadm.go:310] 
	I0925 18:30:28.467541   16451 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0925 18:30:28.467544   16451 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 18:30:28.467548   16451 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 18:30:28.467552   16451 kubeadm.go:310] 
	I0925 18:30:28.467556   16451 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0925 18:30:28.467560   16451 kubeadm.go:310] 
	I0925 18:30:28.467564   16451 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 18:30:28.467568   16451 kubeadm.go:310] 
	I0925 18:30:28.467571   16451 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0925 18:30:28.467574   16451 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 18:30:28.467577   16451 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 18:30:28.467582   16451 kubeadm.go:310] 
	I0925 18:30:28.467586   16451 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 18:30:28.467590   16451 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0925 18:30:28.467594   16451 kubeadm.go:310] 
	I0925 18:30:28.467598   16451 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5s7ik4.w0bongazummr9xfg \
	I0925 18:30:28.467602   16451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4435e8a35f1d2e45630b26d823949b5678baf780d841b33ba9758e14b1072a05 \
	I0925 18:30:28.467606   16451 kubeadm.go:310] 	--control-plane 
	I0925 18:30:28.467611   16451 kubeadm.go:310] 
	I0925 18:30:28.467614   16451 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0925 18:30:28.467619   16451 kubeadm.go:310] 
	I0925 18:30:28.467630   16451 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5s7ik4.w0bongazummr9xfg \
	I0925 18:30:28.467632   16451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4435e8a35f1d2e45630b26d823949b5678baf780d841b33ba9758e14b1072a05 
	I0925 18:30:28.470413   16451 cni.go:84] Creating CNI manager for ""
	I0925 18:30:28.470439   16451 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 18:30:28.472167   16451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 18:30:28.473505   16451 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0925 18:30:28.484343   16451 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0925 18:30:28.484500   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2018657651 /etc/cni/net.d/1-k8s.conflist
	I0925 18:30:28.496124   16451 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 18:30:28.496216   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:28.496226   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_25T18_30_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0925 18:30:28.505493   16451 ops.go:34] apiserver oom_adj: -16
	I0925 18:30:28.564741   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:29.065404   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:29.565096   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:30.065723   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:30.565845   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:31.065434   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:31.565391   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:32.065706   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:32.565139   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:33.065627   16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 18:30:33.137849   16451 kubeadm.go:1113] duration metric: took 4.64169258s to wait for elevateKubeSystemPrivileges
	I0925 18:30:33.137890   16451 kubeadm.go:394] duration metric: took 13.974475789s to StartCluster
	I0925 18:30:33.137913   16451 settings.go:142] acquiring lock: {Name:mk8e263098eda0612fe68e9ecc518f6074bb016b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:33.137991   16451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19681-5898/kubeconfig
	I0925 18:30:33.138769   16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/kubeconfig: {Name:mka83a0d132a79e910021e58a51d0487602d6da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 18:30:33.139110   16451 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 18:30:33.139290   16451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0925 18:30:33.139392   16451 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:30:33.139399   16451 addons.go:69] Setting yakd=true in profile "minikube"
	I0925 18:30:33.139416   16451 addons.go:234] Setting addon yakd=true in "minikube"
	I0925 18:30:33.139446   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.139696   16451 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0925 18:30:33.139714   16451 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0925 18:30:33.139741   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.140159   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.140174   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.140210   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.140286   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.140296   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.140323   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.140372   16451 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0925 18:30:33.140388   16451 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0925 18:30:33.140418   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.141144   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.141160   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.141192   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.141367   16451 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0925 18:30:33.141384   16451 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0925 18:30:33.141391   16451 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0925 18:30:33.141406   16451 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0925 18:30:33.141427   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.141444   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.142088   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.142104   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.142138   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.142319   16451 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0925 18:30:33.142317   16451 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0925 18:30:33.142364   16451 mustload.go:65] Loading cluster: minikube
	I0925 18:30:33.142378   16451 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0925 18:30:33.142410   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.142563   16451 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:30:33.143068   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.143083   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.143095   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.143111   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.143112   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.143142   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.143598   16451 out.go:177] * Configuring local host environment ...
	W0925 18:30:33.146773   16451 out.go:270] * 
	I0925 18:30:33.147559   16451 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0925 18:30:33.147580   16451 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0925 18:30:33.147611   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.147914   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.147933   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.147959   16451 addons.go:69] Setting volcano=true in profile "minikube"
	I0925 18:30:33.148001   16451 addons.go:69] Setting registry=true in profile "minikube"
	I0925 18:30:33.148025   16451 addons.go:234] Setting addon registry=true in "minikube"
	I0925 18:30:33.148051   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.148003   16451 addons.go:234] Setting addon volcano=true in "minikube"
	I0925 18:30:33.148261   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.148717   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.148738   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.148812   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.148877   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.148893   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.149033   16451 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0925 18:30:33.149062   16451 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0925 18:30:33.149097   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.149201   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.149400   16451 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0925 18:30:33.149423   16451 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0925 18:30:33.149792   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.149813   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.149849   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.149924   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.149941   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.149970   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.150161   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.150203   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.150245   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.147972   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.150581   16451 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0925 18:30:33.150602   16451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	W0925 18:30:33.147980   16451 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0925 18:30:33.150700   16451 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0925 18:30:33.150712   16451 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0925 18:30:33.150722   16451 out.go:270] * 
	W0925 18:30:33.150836   16451 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0925 18:30:33.150968   16451 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0925 18:30:33.151011   16451 out.go:270] * 
	W0925 18:30:33.151046   16451 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0925 18:30:33.151173   16451 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0925 18:30:33.151200   16451 out.go:270] * 
	W0925 18:30:33.151251   16451 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0925 18:30:33.151356   16451 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 18:30:33.152827   16451 out.go:177] * Verifying Kubernetes components...
	I0925 18:30:33.158805   16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0925 18:30:33.161702   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.161730   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.161766   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.174262   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.174268   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.174406   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.175510   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.177960   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.191474   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.191907   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.193930   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.193988   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.195153   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.199342   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.201609   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.201658   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.201898   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.201950   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.202011   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.202030   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.202063   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.203368   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.211876   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.211935   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.212201   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.213803   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.213860   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.217956   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.217996   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.218493   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.218745   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.218796   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.220488   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.220534   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.222647   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.222668   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.224015   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.224055   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.226907   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.226923   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.229659   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.229671   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.229677   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.229688   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.230560   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.232579   16451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 18:30:33.233204   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.233284   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.233906   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.234105   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.234210   16451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 18:30:33.234233   16451 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0925 18:30:33.234241   16451 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 18:30:33.234276   16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 18:30:33.234547   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.234731   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.234911   16451 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0925 18:30:33.235491   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.234917   16451 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0925 18:30:33.235518   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.235993   16451 out.go:177]   - Using image docker.io/registry:2.8.3
	I0925 18:30:33.236787   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.236827   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.236946   16451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 18:30:33.236979   16451 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 18:30:33.237290   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1939685709 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 18:30:33.237798   16451 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0925 18:30:33.237828   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0925 18:30:33.237936   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1941677908 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0925 18:30:33.238324   16451 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0925 18:30:33.239437   16451 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0925 18:30:33.239439   16451 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0925 18:30:33.239559   16451 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0925 18:30:33.239682   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3643534877 /etc/kubernetes/addons/yakd-ns.yaml
	I0925 18:30:33.240047   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.240091   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.240785   16451 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0925 18:30:33.240820   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0925 18:30:33.240931   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube114926721 /etc/kubernetes/addons/registry-rc.yaml
	I0925 18:30:33.241266   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.241306   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.242985   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.244427   16451 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0925 18:30:33.245706   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.245745   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.247789   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 18:30:33.247926   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1092265232 /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 18:30:33.248975   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.249023   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.249562   16451 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 18:30:33.249597   16451 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 18:30:33.249719   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3056456401 /etc/kubernetes/addons/ig-namespace.yaml
	I0925 18:30:33.249993   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.250021   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.250670   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.252264   16451 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0925 18:30:33.252303   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.253042   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.253057   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.253088   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.253306   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.254759   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.254778   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.254864   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.254890   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.257914   16451 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0925 18:30:33.265875   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.268033   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0925 18:30:33.268308   16451 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0925 18:30:33.269258   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0925 18:30:33.269798   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0925 18:30:33.270710   16451 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0925 18:30:33.272512   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0925 18:30:33.274554   16451 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0925 18:30:33.274596   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0925 18:30:33.275763   16451 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0925 18:30:33.275794   16451 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0925 18:30:33.277158   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube556987342 /etc/kubernetes/addons/registry-svc.yaml
	I0925 18:30:33.284193   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.284218   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.284457   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.284481   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.284531   16451 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0925 18:30:33.284554   16451 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0925 18:30:33.285249   16451 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 18:30:33.285269   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2650464946 /etc/kubernetes/addons/yakd-sa.yaml
	I0925 18:30:33.285275   16451 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 18:30:33.285288   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 18:30:33.285387   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3669827224 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 18:30:33.285415   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2335842946 /etc/kubernetes/addons/volcano-deployment.yaml
	I0925 18:30:33.285532   16451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 18:30:33.285548   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0925 18:30:33.285637   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4231682905 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 18:30:33.285755   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.285774   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.286052   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0925 18:30:33.289341   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0925 18:30:33.289739   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.292034   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0925 18:30:33.293564   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.294357   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0925 18:30:33.294651   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.294499   16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0925 18:30:33.294920   16451 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0925 18:30:33.295054   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1034850640 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0925 18:30:33.295733   16451 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0925 18:30:33.295777   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:33.296449   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:33.296468   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:33.296675   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:33.296692   16451 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0925 18:30:33.296716   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0925 18:30:33.297325   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1762224384 /etc/kubernetes/addons/registry-proxy.yaml
	I0925 18:30:33.298761   16451 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0925 18:30:33.299310   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0925 18:30:33.301360   16451 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0925 18:30:33.301461   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0925 18:30:33.301926   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3220111299 /etc/kubernetes/addons/deployment.yaml
	I0925 18:30:33.304768   16451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0925 18:30:33.311833   16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0925 18:30:33.311865   16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0925 18:30:33.312009   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3369207112 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0925 18:30:33.312721   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0925 18:30:33.317073   16451 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0925 18:30:33.317109   16451 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0925 18:30:33.317225   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2898536084 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0925 18:30:33.319722   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0925 18:30:33.322780   16451 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 18:30:33.322798   16451 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 18:30:33.322890   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube652790666 /etc/kubernetes/addons/ig-role.yaml
	I0925 18:30:33.322994   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.325959   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:33.331232   16451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 18:30:33.331260   16451 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 18:30:33.331383   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3962311012 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 18:30:33.331435   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0925 18:30:33.332002   16451 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0925 18:30:33.332021   16451 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0925 18:30:33.332146   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2752837202 /etc/kubernetes/addons/yakd-crb.yaml
	I0925 18:30:33.341082   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.341148   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.343651   16451 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 18:30:33.343680   16451 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 18:30:33.343804   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1729627317 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 18:30:33.344013   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:33.344097   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:33.348230   16451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 18:30:33.348257   16451 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 18:30:33.348377   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3115841768 /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 18:30:33.356756   16451 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0925 18:30:33.356846   16451 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0925 18:30:33.356978   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3554861798 /etc/kubernetes/addons/yakd-svc.yaml
	I0925 18:30:33.360804   16451 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0925 18:30:33.360836   16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0925 18:30:33.361005   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3043921755 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0925 18:30:33.363785   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.363822   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.368101   16451 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 18:30:33.369543   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:33.369593   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:33.369812   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.369858   16451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 18:30:33.369874   16451 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0925 18:30:33.369881   16451 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0925 18:30:33.369923   16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0925 18:30:33.371773   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 18:30:33.374211   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:33.376295   16451 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0925 18:30:33.377754   16451 out.go:177]   - Using image docker.io/busybox:stable
	I0925 18:30:33.379201   16451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0925 18:30:33.379230   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0925 18:30:33.379369   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4086388198 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0925 18:30:33.380060   16451 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0925 18:30:33.380065   16451 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0925 18:30:33.380086   16451 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0925 18:30:33.380091   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0925 18:30:33.380221   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3286914333 /etc/kubernetes/addons/yakd-dp.yaml
	I0925 18:30:33.380230   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3148317559 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0925 18:30:33.381395   16451 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 18:30:33.381421   16451 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 18:30:33.381531   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3314067596 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 18:30:33.383629   16451 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 18:30:33.383753   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4272221833 /etc/kubernetes/addons/storageclass.yaml
	I0925 18:30:33.401537   16451 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 18:30:33.401608   16451 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0925 18:30:33.401626   16451 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0925 18:30:33.401754   16451 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 18:30:33.402797   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube913712496 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 18:30:33.402875   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2073121036 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0925 18:30:33.430386   16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0925 18:30:33.430431   16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0925 18:30:33.430579   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1336820379 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0925 18:30:33.432590   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0925 18:30:33.436641   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0925 18:30:33.443547   16451 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0925 18:30:33.443608   16451 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0925 18:30:33.443764   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3690746514 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0925 18:30:33.445617   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 18:30:33.446976   16451 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 18:30:33.447014   16451 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 18:30:33.447180   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2071725817 /etc/kubernetes/addons/ig-crd.yaml
	I0925 18:30:33.471342   16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0925 18:30:33.471388   16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0925 18:30:33.471519   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2576139560 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0925 18:30:33.523722   16451 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 18:30:33.523759   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0925 18:30:33.523886   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1762273621 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 18:30:33.595131   16451 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 18:30:33.595163   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0925 18:30:33.595289   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3385165694 /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 18:30:33.596893   16451 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0925 18:30:33.607923   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 18:30:33.621789   16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0925 18:30:33.621834   16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0925 18:30:33.621984   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2508943280 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0925 18:30:33.659245   16451 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0925 18:30:33.669638   16451 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0925 18:30:33.669664   16451 node_ready.go:38] duration metric: took 10.387108ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0925 18:30:33.669677   16451 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 18:30:33.687628   16451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:33.762737   16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0925 18:30:33.762776   16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0925 18:30:33.762933   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1967250124 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0925 18:30:33.763788   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 18:30:33.885686   16451 addons.go:475] Verifying addon registry=true in "minikube"
	I0925 18:30:33.887628   16451 out.go:177] * Verifying registry addon...
	I0925 18:30:33.896383   16451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 18:30:33.910809   16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0925 18:30:33.910852   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0925 18:30:33.911665   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube208056867 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0925 18:30:33.914840   16451 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 18:30:33.914869   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:34.070693   16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0925 18:30:34.070738   16451 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0925 18:30:34.070878   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1397583527 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0925 18:30:34.093544   16451 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0925 18:30:34.106027   16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0925 18:30:34.106065   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0925 18:30:34.106202   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1983209738 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0925 18:30:34.330810   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.045484746s)
	I0925 18:30:34.373046   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.001226211s)
	I0925 18:30:34.373081   16451 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0925 18:30:34.378844   16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0925 18:30:34.378881   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0925 18:30:34.379019   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1702314943 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0925 18:30:34.400871   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:34.422755   16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 18:30:34.422788   16451 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0925 18:30:34.422934   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2340578506 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 18:30:34.456706   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 18:30:34.513395   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.080736575s)
	I0925 18:30:34.525092   16451 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0925 18:30:34.600867   16451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0925 18:30:34.832021   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.068176903s)
	I0925 18:30:34.855999   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.419315225s)
	I0925 18:30:34.901634   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:35.415782   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:35.476541   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.868517254s)
	W0925 18:30:35.476596   16451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0925 18:30:35.476627   16451 retry.go:31] will retry after 311.413631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0925 18:30:35.695191   16451 pod_ready.go:103] pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:35.788442   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 18:30:35.911035   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:36.348836   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.036070958s)
	I0925 18:30:36.399907   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:36.576229   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.119461859s)
	I0925 18:30:36.576323   16451 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0925 18:30:36.579030   16451 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 18:30:36.581745   16451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 18:30:36.586695   16451 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 18:30:36.586721   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:36.921937   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:37.088172   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:37.401251   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:37.586556   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:37.917509   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:38.087190   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:38.193784   16451 pod_ready.go:93] pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:38.193808   16451 pod_ready.go:82] duration metric: took 4.5060921s for pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:38.193840   16451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:38.402079   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:38.587520   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:38.712716   16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.924222348s)
	I0925 18:30:38.900372   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:39.086506   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:39.400856   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:39.587166   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:39.900692   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:40.088382   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:40.200260   16451 pod_ready.go:103] pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace has status "Ready":"False"
	I0925 18:30:40.262235   16451 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 18:30:40.262381   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2012514082 /var/lib/minikube/google_application_credentials.json
	I0925 18:30:40.272537   16451 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 18:30:40.272676   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1109561959 /var/lib/minikube/google_cloud_project
	I0925 18:30:40.283829   16451 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0925 18:30:40.283916   16451 host.go:66] Checking if "minikube" exists ...
	I0925 18:30:40.284669   16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0925 18:30:40.284694   16451 api_server.go:166] Checking apiserver status ...
	I0925 18:30:40.284730   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:40.302688   16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
	I0925 18:30:40.313184   16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
	I0925 18:30:40.313240   16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
	I0925 18:30:40.322825   16451 api_server.go:204] freezer state: "THAWED"
	I0925 18:30:40.322856   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:40.326387   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:40.326464   16451 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 18:30:40.363360   16451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0925 18:30:40.399700   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:40.521768   16451 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0925 18:30:40.544448   16451 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 18:30:40.544518   16451 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 18:30:40.544695   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube384694292 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 18:30:40.554885   16451 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 18:30:40.554915   16451 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 18:30:40.555017   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1039535344 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 18:30:40.565411   16451 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 18:30:40.565438   16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0925 18:30:40.565541   16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2115557671 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 18:30:40.576440   16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 18:30:40.586428   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:41.006444   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:41.088172   16451 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0925 18:30:41.089763   16451 out.go:177] * Verifying gcp-auth addon...
	I0925 18:30:41.092100   16451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 18:30:41.112891   16451 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 18:30:41.115789   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:41.199169   16451 pod_ready.go:98] pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-25 18:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-25 18:30:34 +0000 UTC,FinishedAt:2024-09-25 18:30:40 +0000 UTC,ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc Started:0xc00224a3d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029a2190} {Name:kube-api-access-l6btb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029a21a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0925 18:30:41.199205   16451 pod_ready.go:82] duration metric: took 3.005354512s for pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace to be "Ready" ...
	E0925 18:30:41.199220   16451 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-25 18:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-25 18:30:34 +0000 UTC,FinishedAt:2024-09-25 18:30:40 +0000 UTC,ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc Started:0xc00224a3d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029a2190} {Name:kube-api-access-l6btb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029a21a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0925 18:30:41.199231   16451 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.203663   16451 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:41.203684   16451 pod_ready.go:82] duration metric: took 4.44637ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.203693   16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.207717   16451 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:41.207735   16451 pod_ready.go:82] duration metric: took 4.035155ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.207743   16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.211714   16451 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:41.211737   16451 pod_ready.go:82] duration metric: took 3.987863ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.211746   16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ms7l" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.218481   16451 pod_ready.go:93] pod "kube-proxy-5ms7l" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:41.218505   16451 pod_ready.go:82] duration metric: took 6.752288ms for pod "kube-proxy-5ms7l" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.218518   16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.400368   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:41.586150   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:41.597209   16451 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:41.597229   16451 pod_ready.go:82] duration metric: took 378.704419ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.597239   16451 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-spbz5" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:41.899772   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:42.086780   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:42.397274   16451 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-spbz5" in "kube-system" namespace has status "Ready":"True"
	I0925 18:30:42.397297   16451 pod_ready.go:82] duration metric: took 800.049841ms for pod "nvidia-device-plugin-daemonset-spbz5" in "kube-system" namespace to be "Ready" ...
	I0925 18:30:42.397310   16451 pod_ready.go:39] duration metric: took 8.727620075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 18:30:42.397332   16451 api_server.go:52] waiting for apiserver process to appear ...
	I0925 18:30:42.397386   16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 18:30:42.399575   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:42.414293   16451 api_server.go:72] duration metric: took 9.26288983s to wait for apiserver process to appear ...
	I0925 18:30:42.414320   16451 api_server.go:88] waiting for apiserver healthz status ...
	I0925 18:30:42.414343   16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0925 18:30:42.418653   16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0925 18:30:42.419541   16451 api_server.go:141] control plane version: v1.31.1
	I0925 18:30:42.419561   16451 api_server.go:131] duration metric: took 5.234825ms to wait for apiserver health ...
	I0925 18:30:42.419587   16451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 18:30:42.587132   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:42.601659   16451 system_pods.go:59] 16 kube-system pods found
	I0925 18:30:42.601689   16451 system_pods.go:61] "coredns-7c65d6cfc9-mgqmr" [e6ab0f25-7fe6-4b26-9d11-32ff30994e10] Running
	I0925 18:30:42.601697   16451 system_pods.go:61] "csi-hostpath-attacher-0" [ab781cd0-df2a-4298-a481-a690f95ef7f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0925 18:30:42.601703   16451 system_pods.go:61] "csi-hostpath-resizer-0" [87eb9560-2857-4f0a-8447-a8a0946867ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0925 18:30:42.601711   16451 system_pods.go:61] "csi-hostpathplugin-8xdw7" [79f4ecb7-c8cd-40ae-8312-bcb3b705657c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 18:30:42.601715   16451 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6b5e6912-9880-49db-9181-4908a70236c1] Running
	I0925 18:30:42.601719   16451 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [8ee32a0c-c941-40b4-bc62-85f1fda3283b] Running
	I0925 18:30:42.601723   16451 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [382a05ae-5914-42c3-af8a-f59edc16429c] Running
	I0925 18:30:42.601727   16451 system_pods.go:61] "kube-proxy-5ms7l" [d5e49d6f-dfb6-41fb-9873-45b6e0ce1470] Running
	I0925 18:30:42.601733   16451 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [aa16179e-57f0-4ba5-8202-99f4be614808] Running
	I0925 18:30:42.601738   16451 system_pods.go:61] "metrics-server-84c5f94fbc-5lrcn" [5acd34ff-05b5-4944-98e6-9578b96dd661] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 18:30:42.601745   16451 system_pods.go:61] "nvidia-device-plugin-daemonset-spbz5" [bb5284d5-47c1-4dd6-8540-374b1dd30ffb] Running
	I0925 18:30:42.601751   16451 system_pods.go:61] "registry-66c9cd494c-hjbk7" [a9c82301-9560-4dd9-a31e-55bc04efd0e3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0925 18:30:42.601756   16451 system_pods.go:61] "registry-proxy-wxxhj" [532ef9f6-818b-4628-a77d-5cb0d7ae89b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0925 18:30:42.601765   16451 system_pods.go:61] "snapshot-controller-56fcc65765-plm79" [fe1144db-150a-4079-b701-d1f55f2e4c2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:42.601773   16451 system_pods.go:61] "snapshot-controller-56fcc65765-smq7t" [614d5a58-5053-4c5a-a413-b760a6578a07] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:42.601778   16451 system_pods.go:61] "storage-provisioner" [c5852ab1-5e02-4a46-8de1-98c989aa3d17] Running
	I0925 18:30:42.601787   16451 system_pods.go:74] duration metric: took 182.195666ms to wait for pod list to return data ...
	I0925 18:30:42.601796   16451 default_sa.go:34] waiting for default service account to be created ...
	I0925 18:30:42.798177   16451 default_sa.go:45] found service account: "default"
	I0925 18:30:42.798201   16451 default_sa.go:55] duration metric: took 196.39941ms for default service account to be created ...
	I0925 18:30:42.798209   16451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 18:30:42.900625   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:43.003136   16451 system_pods.go:86] 16 kube-system pods found
	I0925 18:30:43.003170   16451 system_pods.go:89] "coredns-7c65d6cfc9-mgqmr" [e6ab0f25-7fe6-4b26-9d11-32ff30994e10] Running
	I0925 18:30:43.003183   16451 system_pods.go:89] "csi-hostpath-attacher-0" [ab781cd0-df2a-4298-a481-a690f95ef7f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0925 18:30:43.003192   16451 system_pods.go:89] "csi-hostpath-resizer-0" [87eb9560-2857-4f0a-8447-a8a0946867ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0925 18:30:43.003203   16451 system_pods.go:89] "csi-hostpathplugin-8xdw7" [79f4ecb7-c8cd-40ae-8312-bcb3b705657c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 18:30:43.003213   16451 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6b5e6912-9880-49db-9181-4908a70236c1] Running
	I0925 18:30:43.003220   16451 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [8ee32a0c-c941-40b4-bc62-85f1fda3283b] Running
	I0925 18:30:43.003229   16451 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [382a05ae-5914-42c3-af8a-f59edc16429c] Running
	I0925 18:30:43.003235   16451 system_pods.go:89] "kube-proxy-5ms7l" [d5e49d6f-dfb6-41fb-9873-45b6e0ce1470] Running
	I0925 18:30:43.003243   16451 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [aa16179e-57f0-4ba5-8202-99f4be614808] Running
	I0925 18:30:43.003253   16451 system_pods.go:89] "metrics-server-84c5f94fbc-5lrcn" [5acd34ff-05b5-4944-98e6-9578b96dd661] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 18:30:43.003261   16451 system_pods.go:89] "nvidia-device-plugin-daemonset-spbz5" [bb5284d5-47c1-4dd6-8540-374b1dd30ffb] Running
	I0925 18:30:43.003271   16451 system_pods.go:89] "registry-66c9cd494c-hjbk7" [a9c82301-9560-4dd9-a31e-55bc04efd0e3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0925 18:30:43.003282   16451 system_pods.go:89] "registry-proxy-wxxhj" [532ef9f6-818b-4628-a77d-5cb0d7ae89b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0925 18:30:43.003294   16451 system_pods.go:89] "snapshot-controller-56fcc65765-plm79" [fe1144db-150a-4079-b701-d1f55f2e4c2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:43.003303   16451 system_pods.go:89] "snapshot-controller-56fcc65765-smq7t" [614d5a58-5053-4c5a-a413-b760a6578a07] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0925 18:30:43.003314   16451 system_pods.go:89] "storage-provisioner" [c5852ab1-5e02-4a46-8de1-98c989aa3d17] Running
	I0925 18:30:43.003324   16451 system_pods.go:126] duration metric: took 205.108575ms to wait for k8s-apps to be running ...
	I0925 18:30:43.003335   16451 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 18:30:43.003394   16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0925 18:30:43.019035   16451 system_svc.go:56] duration metric: took 15.687045ms WaitForService to wait for kubelet
	I0925 18:30:43.019066   16451 kubeadm.go:582] duration metric: took 9.867676746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 18:30:43.019086   16451 node_conditions.go:102] verifying NodePressure condition ...
	I0925 18:30:43.086315   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:43.197637   16451 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0925 18:30:43.197669   16451 node_conditions.go:123] node cpu capacity is 8
	I0925 18:30:43.197683   16451 node_conditions.go:105] duration metric: took 178.591645ms to run NodePressure ...
	I0925 18:30:43.197698   16451 start.go:241] waiting for startup goroutines ...
	I0925 18:30:43.399806   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:43.586530   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:43.900333   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:44.086004   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:44.400666   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:44.585831   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:44.900091   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:45.085909   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:45.400343   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:45.586450   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:45.899780   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:46.087440   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:46.400214   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:46.585782   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:46.901211   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:47.086978   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:47.399664   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:47.586835   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:47.900847   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:48.086798   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:48.401253   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:48.586397   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:48.899930   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:49.086589   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:49.399666   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:49.586203   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:49.900691   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 18:30:50.086362   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:50.400399   16451 kapi.go:107] duration metric: took 16.504031233s to wait for kubernetes.io/minikube-addons=registry ...
	I0925 18:30:50.586330   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:51.086545   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:51.586860   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:52.086817   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:52.586279   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:53.086239   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:53.629639   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:54.085309   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:54.586129   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:55.086691   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:55.586932   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:56.086559   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:56.586461   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:57.086875   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:57.586312   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:58.086404   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:58.586528   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:59.085287   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:30:59.586449   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:00.086130   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:00.587474   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:01.086369   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:01.585875   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:02.087049   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:02.585712   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:03.086805   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:03.586315   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:04.086086   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:04.586332   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:05.086485   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:05.586589   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:06.086598   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:06.586718   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:07.087132   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:07.586595   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:08.086828   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:08.587140   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:09.087029   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:09.586459   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:10.086701   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:10.586261   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:11.086520   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:11.586679   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:12.086461   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 18:31:12.588659   16451 kapi.go:107] duration metric: took 36.006912472s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0925 18:31:22.595405   16451 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 18:31:22.595426   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:23.095492   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:23.595620   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:24.095355   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:24.595487   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:25.095234   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:25.595709   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:26.096030   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:26.595208   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:27.095688   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:27.595873   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:28.096239   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:28.595342   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:29.095525   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:29.595810   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:30.096274   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:30.596578   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:31.095487   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:31.595405   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:32.095342   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:32.595175   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:33.094845   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:33.594999   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:34.095122   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:34.595185   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:35.094838   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:35.595437   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:36.095549   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:36.595475   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:37.095409   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:37.595187   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:38.095488   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:38.595435   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:39.095388   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:39.595451   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:40.095719   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:40.596454   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:41.095217   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:41.595253   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:42.095450   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:42.595359   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:43.095211   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:43.595301   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:44.095610   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:44.595777   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:45.095647   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:45.633259   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:46.095076   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:46.594693   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:47.096540   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:47.595385   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:48.095713   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:48.596247   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:49.095323   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:49.595466   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:50.096089   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:50.596910   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:51.096466   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:51.595556   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:52.095331   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:52.595165   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:53.094921   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:53.595125   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:54.095336   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:54.595218   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:55.094994   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:55.596076   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:56.096173   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:56.595329   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:57.095667   16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 18:31:57.595949   16451 kapi.go:107] duration metric: took 1m16.503849056s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 18:31:57.597642   16451 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0925 18:31:57.598974   16451 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 18:31:57.600212   16451 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0925 18:31:57.601516   16451 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0925 18:31:57.602928   16451 addons.go:510] duration metric: took 1m24.463638703s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner metrics-server yakd inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0925 18:31:57.602969   16451 start.go:246] waiting for cluster config update ...
	I0925 18:31:57.602984   16451 start.go:255] writing updated cluster config ...
	I0925 18:31:57.603220   16451 exec_runner.go:51] Run: rm -f paused
	I0925 18:31:57.646747   16451 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0925 18:31:57.648604   16451 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-16 02:18:09 UTC, end at Wed 2024-09-25 18:41:51 UTC. --
	Sep 25 18:34:10 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:34:10.029764857Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=44905079cd82c12f traceID=4b30a805247d77635d593f3c15582d70
	Sep 25 18:34:10 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:34:10.031835361Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=44905079cd82c12f traceID=4b30a805247d77635d593f3c15582d70
	Sep 25 18:34:11 ubuntu-20-agent-2 cri-dockerd[16995]: time="2024-09-25T18:34:11Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 25 18:34:12 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:34:12.455491143Z" level=info msg="ignoring event" container=4690f8e026dd1169f9c4eec417441ba3c0580aa39970920f7d9655fb78ca7a0d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:35:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:35:32.023861114Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4c6ac7a3b758f898 traceID=9b7fab46f0dfdacfe093200bad8a9f5f
	Sep 25 18:35:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:35:32.025960844Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4c6ac7a3b758f898 traceID=9b7fab46f0dfdacfe093200bad8a9f5f
	Sep 25 18:36:59 ubuntu-20-agent-2 cri-dockerd[16995]: time="2024-09-25T18:36:59Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.358711022Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.358711917Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.360536008Z" level=error msg="Error running exec 344599b9225df211c40d9fca1664129211e67b280541e966b6649781a91b7719 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=cac79247b65b7d3f traceID=5c895f2fb3dfb1ff872ca7c9851e2ebc
	Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.547427980Z" level=info msg="ignoring event" container=f67c8f5ba20d6e3d3737a5f718559c84a0d3a952befbebadf13922595920fc5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:38:25 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:38:25.030437812Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=11679319bd599959 traceID=d8422ab2070c00277f3307a23d14b3e9
	Sep 25 18:38:25 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:38:25.032445151Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=11679319bd599959 traceID=d8422ab2070c00277f3307a23d14b3e9
	Sep 25 18:40:50 ubuntu-20-agent-2 cri-dockerd[16995]: time="2024-09-25T18:40:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/415346bb1fe83d0c7a779f8debf0cb57fc0f23e0b47ed1f7c05540a1180f0d15/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 25 18:40:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:40:50.876102907Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=4c5a19d7991812f4 traceID=a031fb633320747ddd207b1c01d90f59
	Sep 25 18:40:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:40:50.878250243Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=4c5a19d7991812f4 traceID=a031fb633320747ddd207b1c01d90f59
	Sep 25 18:41:03 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:03.027714176Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=0f9f0fa70d6fc92d traceID=7eeaaec6a12e80f724d576f367038285
	Sep 25 18:41:03 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:03.030193818Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=0f9f0fa70d6fc92d traceID=7eeaaec6a12e80f724d576f367038285
	Sep 25 18:41:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:32.016791989Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2af4751d728f441a traceID=7aa696718ce0bceba99ede8381916d2a
	Sep 25 18:41:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:32.018874015Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2af4751d728f441a traceID=7aa696718ce0bceba99ede8381916d2a
	Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.339210870Z" level=info msg="ignoring event" container=415346bb1fe83d0c7a779f8debf0cb57fc0f23e0b47ed1f7c05540a1180f0d15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.588970645Z" level=info msg="ignoring event" container=b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.647766580Z" level=info msg="ignoring event" container=eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.732825423Z" level=info msg="ignoring event" container=42aea5973d0dd26d16573ddd320ff9a5160d12b1f078add073c56550007e3438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.799032787Z" level=info msg="ignoring event" container=adab75f552f6ca95d8937832b1e4547a1ff5c23d7d19140dc01ed728e99ef4c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	f67c8f5ba20d6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   1c3d0a26b789e       gadget-klq6x
	0fc58426b7166       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   d4c31a54c3c06       gcp-auth-89d5ffd79-pnqvf
	c5664d9ec2161       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   bb2b2414aff14       csi-hostpathplugin-8xdw7
	edfd4f3aeaf2c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   bb2b2414aff14       csi-hostpathplugin-8xdw7
	9d43e4f0e4b21       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   bb2b2414aff14       csi-hostpathplugin-8xdw7
	9f5191c90b59d       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   bb2b2414aff14       csi-hostpathplugin-8xdw7
	7490bd1f9d575       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   bb2b2414aff14       csi-hostpathplugin-8xdw7
	8094f730f1dff       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   728e5af95ebd4       csi-hostpath-resizer-0
	d9b117fe732bc       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   bb2b2414aff14       csi-hostpathplugin-8xdw7
	e94995bc970cb       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   99ea965287f3e       csi-hostpath-attacher-0
	758f18afebdc4       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   7ab25bed647bd       snapshot-controller-56fcc65765-smq7t
	029e206f33ec2       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   e1be083360544       snapshot-controller-56fcc65765-plm79
	f3eb330f8e9c8       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   b879f3010c7ec       local-path-provisioner-86d989889c-ns2rp
	c71523aee2964       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   506664c420562       yakd-dashboard-67d98fc6b-pd9f6
	db26d15c05c29       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   d32de4c1516f1       metrics-server-84c5f94fbc-5lrcn
	eeffae39df349       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   adab75f552f6c       registry-proxy-wxxhj
	b1e7a6c95c0b3       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   42aea5973d0dd       registry-66c9cd494c-hjbk7
	08a801c6fb611       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   374aeae3a445f       cloud-spanner-emulator-5b584cc74-t6jwv
	5f709a41d3bef       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   9cfaa0d1c61e2       nvidia-device-plugin-daemonset-spbz5
	52f5a7a4d3f33       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   e85657bb89546       storage-provisioner
	13a3bc32ac447       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   d52a0a37a47cf       coredns-7c65d6cfc9-mgqmr
	e6c312f7f1afb       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   01924e24f8a6d       kube-proxy-5ms7l
	bd8f95275d938       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   090b4902f6933       kube-controller-manager-ubuntu-20-agent-2
	053a2b8fb3519       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   b521914ef847e       kube-apiserver-ubuntu-20-agent-2
	4d1918b8b5d82       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   e5d3ebfe7120a       kube-scheduler-ubuntu-20-agent-2
	1b48eaeaff330       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   3a9533fc1d87c       etcd-ubuntu-20-agent-2
	
	
	==> coredns [13a3bc32ac44] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:58843 - 56094 "HINFO IN 7210559415351330124.1321847438580936925. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066776742s
	[INFO] 10.244.0.23:49930 - 2627 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000381044s
	[INFO] 10.244.0.23:60946 - 32438 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191239s
	[INFO] 10.244.0.23:43280 - 47948 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134226s
	[INFO] 10.244.0.23:41924 - 21466 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000172823s
	[INFO] 10.244.0.23:47311 - 41096 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119807s
	[INFO] 10.244.0.23:49953 - 3492 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125507s
	[INFO] 10.244.0.23:36555 - 2749 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003946393s
	[INFO] 10.244.0.23:43512 - 7280 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005400834s
	[INFO] 10.244.0.23:35963 - 4796 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00423583s
	[INFO] 10.244.0.23:45359 - 7454 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00467304s
	[INFO] 10.244.0.23:56422 - 12595 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00353299s
	[INFO] 10.244.0.23:54598 - 5313 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005209375s
	[INFO] 10.244.0.23:32804 - 48866 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002523889s
	[INFO] 10.244.0.23:32831 - 28635 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.003957719s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_25T18_30_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Sep 2024 18:30:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 25 Sep 2024 18:41:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    00a417d5-0a7b-4811-9a16-2ae49d98a388
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-5b584cc74-t6jwv       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-klq6x                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-pnqvf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-mgqmr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-8xdw7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-5ms7l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-5lrcn              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-spbz5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-plm79         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-smq7t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-ns2rp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-pd9f6               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 23 db d7 94 69 08 06
	[  +1.056461] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a b1 3e de 4c d0 08 06
	[  +0.013100] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 04 03 e8 69 80 08 06
	[Sep25 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fe f6 fc 54 a9 a1 08 06
	[  +1.551951] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 86 c1 cd c3 81 bc 08 06
	[  +2.042059] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba fe c1 8e fe 77 08 06
	[  +4.549208] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 13 86 4f d1 5d 08 06
	[  +0.000218] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 1a d7 35 36 99 d7 08 06
	[  +0.523811] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 ed 10 19 60 ad 08 06
	[ +35.123201] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 7c 95 13 db ca 08 06
	[  +0.028098] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 13 1f 1d 7b 27 08 06
	[ +11.094611] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 cf 8f 57 1e a9 08 06
	[  +0.000495] IPv4: martian source 10.244.0.23 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 04 d9 da 3a b2 08 06
	
	
	==> etcd [1b48eaeaff33] <==
	{"level":"info","ts":"2024-09-25T18:30:24.437007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-25T18:30:24.437019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-25T18:30:24.437973Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-25T18:30:24.438026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:30:24.438053Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-25T18:30:24.438123Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:30:24.438164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-25T18:30:24.438199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-25T18:30:24.438744Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:30:24.438836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:30:24.438861Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-25T18:30:24.439231Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:30:24.439507Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-25T18:30:24.440931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-25T18:30:24.441498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-25T18:30:41.002898Z","caller":"traceutil/trace.go:171","msg":"trace[1815222204] transaction","detail":"{read_only:false; response_revision:822; number_of_response:1; }","duration":"124.633512ms","start":"2024-09-25T18:30:40.878245Z","end":"2024-09-25T18:30:41.002878Z","steps":["trace[1815222204] 'process raft request'  (duration: 122.108788ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-25T18:30:41.003969Z","caller":"traceutil/trace.go:171","msg":"trace[155609340] linearizableReadLoop","detail":"{readStateIndex:843; appliedIndex:841; }","duration":"124.399571ms","start":"2024-09-25T18:30:40.879557Z","end":"2024-09-25T18:30:41.003956Z","steps":["trace[155609340] 'read index received'  (duration: 120.895111ms)","trace[155609340] 'applied index is now lower than readState.Index'  (duration: 3.503772ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-25T18:30:41.003992Z","caller":"traceutil/trace.go:171","msg":"trace[730131621] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"125.731579ms","start":"2024-09-25T18:30:40.878247Z","end":"2024-09-25T18:30:41.003978Z","steps":["trace[730131621] 'process raft request'  (duration: 125.637436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:30:41.004128Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.777679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-25T18:30:41.004198Z","caller":"traceutil/trace.go:171","msg":"trace[1242519320] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"105.870533ms","start":"2024-09-25T18:30:40.898317Z","end":"2024-09-25T18:30:41.004187Z","steps":["trace[1242519320] 'agreement among raft nodes before linearized reading'  (duration: 105.733612ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-25T18:30:41.004138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.570127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
	{"level":"info","ts":"2024-09-25T18:30:41.004289Z","caller":"traceutil/trace.go:171","msg":"trace[2099751762] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:823; }","duration":"124.718851ms","start":"2024-09-25T18:30:40.879553Z","end":"2024-09-25T18:30:41.004272Z","steps":["trace[2099751762] 'agreement among raft nodes before linearized reading'  (duration: 124.479819ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-25T18:40:24.455745Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1687}
	{"level":"info","ts":"2024-09-25T18:40:24.479584Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1687,"took":"23.299281ms","hash":1946569372,"current-db-size-bytes":8171520,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4370432,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-25T18:40:24.479634Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1946569372,"revision":1687,"compact-revision":-1}
	
	
	==> gcp-auth [0fc58426b716] <==
	2024/09/25 18:31:56 GCP Auth Webhook started!
	2024/09/25 18:32:13 Ready to marshal response ...
	2024/09/25 18:32:13 Ready to write response ...
	2024/09/25 18:32:14 Ready to marshal response ...
	2024/09/25 18:32:14 Ready to write response ...
	2024/09/25 18:32:37 Ready to marshal response ...
	2024/09/25 18:32:37 Ready to write response ...
	2024/09/25 18:32:37 Ready to marshal response ...
	2024/09/25 18:32:37 Ready to write response ...
	2024/09/25 18:32:38 Ready to marshal response ...
	2024/09/25 18:32:38 Ready to write response ...
	2024/09/25 18:40:50 Ready to marshal response ...
	2024/09/25 18:40:50 Ready to write response ...
	
	
	==> kernel <==
	 18:41:51 up 24 min,  0 users,  load average: 0.25, 0.31, 0.30
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [053a2b8fb351] <==
	W0925 18:31:16.026660       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.160.240:443: connect: connection refused
	W0925 18:31:22.102335       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.228.183:443: connect: connection refused
	E0925 18:31:22.102376       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.228.183:443: connect: connection refused" logger="UnhandledError"
	W0925 18:31:44.113288       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.228.183:443: connect: connection refused
	E0925 18:31:44.113321       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.228.183:443: connect: connection refused" logger="UnhandledError"
	W0925 18:31:44.120958       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.228.183:443: connect: connection refused
	E0925 18:31:44.120994       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.228.183:443: connect: connection refused" logger="UnhandledError"
	I0925 18:32:13.915131       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0925 18:32:13.933383       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0925 18:32:27.312060       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0925 18:32:27.344103       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0925 18:32:27.423906       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0925 18:32:27.467826       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0925 18:32:27.467865       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0925 18:32:27.562345       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0925 18:32:27.634412       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0925 18:32:27.675171       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0925 18:32:27.711737       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0925 18:32:28.371671       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0925 18:32:28.562436       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0925 18:32:28.597713       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0925 18:32:28.664109       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0925 18:32:28.676462       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0925 18:32:28.711858       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0925 18:32:28.913641       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [bd8f95275d93] <==
	W0925 18:40:36.388527       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:40:36.388571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:40:40.392767       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:40:40.392810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:40:41.105744       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:40:41.105782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:40:47.737067       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:40:47.737117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:40:48.472650       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:40:48.472806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:41:18.992642       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:18.992693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:41:20.676040       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:20.676080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:41:23.100012       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:23.100055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:41:32.681638       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:32.681679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:41:33.792971       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:33.793018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:41:37.075731       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:37.075775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0925 18:41:40.009162       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 18:41:40.009210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0925 18:41:50.557507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.829µs"
	
	
	==> kube-proxy [e6c312f7f1af] <==
	I0925 18:30:34.146452       1 server_linux.go:66] "Using iptables proxy"
	I0925 18:30:34.377301       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0925 18:30:34.377378       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0925 18:30:34.536342       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0925 18:30:34.536407       1 server_linux.go:169] "Using iptables Proxier"
	I0925 18:30:34.548982       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0925 18:30:34.549346       1 server.go:483] "Version info" version="v1.31.1"
	I0925 18:30:34.549372       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 18:30:34.580167       1 config.go:199] "Starting service config controller"
	I0925 18:30:34.587528       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0925 18:30:34.585952       1 config.go:328] "Starting node config controller"
	I0925 18:30:34.587566       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0925 18:30:34.585904       1 config.go:105] "Starting endpoint slice config controller"
	I0925 18:30:34.587577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0925 18:30:34.687983       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0925 18:30:34.688057       1 shared_informer.go:320] Caches are synced for service config
	I0925 18:30:34.688377       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d1918b8b5d8] <==
	W0925 18:30:25.296955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0925 18:30:25.296976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 18:30:25.296992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0925 18:30:25.296998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:25.296876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 18:30:25.297033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:25.297106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 18:30:25.297135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.210605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 18:30:26.210652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.271081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 18:30:26.271118       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.271876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 18:30:26.271911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.288211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 18:30:26.288254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.291468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 18:30:26.291498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.303998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 18:30:26.304036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.508998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 18:30:26.509049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0925 18:30:26.687823       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 18:30:26.687868       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0925 18:30:28.595598       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-16 02:18:09 UTC, end at Wed 2024-09-25 18:41:51 UTC. --
	Sep 25 18:41:38 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:38.875272   17877 scope.go:117] "RemoveContainer" containerID="f67c8f5ba20d6e3d3737a5f718559c84a0d3a952befbebadf13922595920fc5f"
	Sep 25 18:41:38 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:38.875461   17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-klq6x_gadget(24ca8ebe-3d3b-48a7-b44f-4765147ec0d3)\"" pod="gadget/gadget-klq6x" podUID="24ca8ebe-3d3b-48a7-b44f-4765147ec0d3"
	Sep 25 18:41:38 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:38.877075   17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1f264474-f1fa-4dec-9aa1-7fa98df2d232"
	Sep 25 18:41:46 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:46.877163   17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="38dabc49-8dc3-45c4-a315-1755541fa563"
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.511981   17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/38dabc49-8dc3-45c4-a315-1755541fa563-gcp-creds\") pod \"38dabc49-8dc3-45c4-a315-1755541fa563\" (UID: \"38dabc49-8dc3-45c4-a315-1755541fa563\") "
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.512051   17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwb58\" (UniqueName: \"kubernetes.io/projected/38dabc49-8dc3-45c4-a315-1755541fa563-kube-api-access-cwb58\") pod \"38dabc49-8dc3-45c4-a315-1755541fa563\" (UID: \"38dabc49-8dc3-45c4-a315-1755541fa563\") "
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.512141   17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38dabc49-8dc3-45c4-a315-1755541fa563-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "38dabc49-8dc3-45c4-a315-1755541fa563" (UID: "38dabc49-8dc3-45c4-a315-1755541fa563"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.513897   17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38dabc49-8dc3-45c4-a315-1755541fa563-kube-api-access-cwb58" (OuterVolumeSpecName: "kube-api-access-cwb58") pod "38dabc49-8dc3-45c4-a315-1755541fa563" (UID: "38dabc49-8dc3-45c4-a315-1755541fa563"). InnerVolumeSpecName "kube-api-access-cwb58". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.613164   17877 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cwb58\" (UniqueName: \"kubernetes.io/projected/38dabc49-8dc3-45c4-a315-1755541fa563-kube-api-access-cwb58\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.613191   17877 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/38dabc49-8dc3-45c4-a315-1755541fa563-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:50.877407   17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1f264474-f1fa-4dec-9aa1-7fa98df2d232"
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.915761   17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfvdr\" (UniqueName: \"kubernetes.io/projected/a9c82301-9560-4dd9-a31e-55bc04efd0e3-kube-api-access-jfvdr\") pod \"a9c82301-9560-4dd9-a31e-55bc04efd0e3\" (UID: \"a9c82301-9560-4dd9-a31e-55bc04efd0e3\") "
	Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.917577   17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c82301-9560-4dd9-a31e-55bc04efd0e3-kube-api-access-jfvdr" (OuterVolumeSpecName: "kube-api-access-jfvdr") pod "a9c82301-9560-4dd9-a31e-55bc04efd0e3" (UID: "a9c82301-9560-4dd9-a31e-55bc04efd0e3"). InnerVolumeSpecName "kube-api-access-jfvdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.016702   17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mjql\" (UniqueName: \"kubernetes.io/projected/532ef9f6-818b-4628-a77d-5cb0d7ae89b4-kube-api-access-5mjql\") pod \"532ef9f6-818b-4628-a77d-5cb0d7ae89b4\" (UID: \"532ef9f6-818b-4628-a77d-5cb0d7ae89b4\") "
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.016848   17877 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jfvdr\" (UniqueName: \"kubernetes.io/projected/a9c82301-9560-4dd9-a31e-55bc04efd0e3-kube-api-access-jfvdr\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.018835   17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532ef9f6-818b-4628-a77d-5cb0d7ae89b4-kube-api-access-5mjql" (OuterVolumeSpecName: "kube-api-access-5mjql") pod "532ef9f6-818b-4628-a77d-5cb0d7ae89b4" (UID: "532ef9f6-818b-4628-a77d-5cb0d7ae89b4"). InnerVolumeSpecName "kube-api-access-5mjql". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.117628   17877 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5mjql\" (UniqueName: \"kubernetes.io/projected/532ef9f6-818b-4628-a77d-5cb0d7ae89b4-kube-api-access-5mjql\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.139946   17877 scope.go:117] "RemoveContainer" containerID="eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.185422   17877 scope.go:117] "RemoveContainer" containerID="eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:51.186362   17877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395" containerID="eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.186411   17877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"} err="failed to get container status \"eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395\": rpc error: code = Unknown desc = Error response from daemon: No such container: eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.186445   17877 scope.go:117] "RemoveContainer" containerID="b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.205006   17877 scope.go:117] "RemoveContainer" containerID="b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:51.205896   17877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06" containerID="b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
	Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.205932   17877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"} err="failed to get container status \"b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06\": rpc error: code = Unknown desc = Error response from daemon: No such container: b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
	
	
	==> storage-provisioner [52f5a7a4d3f3] <==
	I0925 18:30:35.407151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 18:30:35.420133       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 18:30:35.420176       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 18:30:35.434319       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 18:30:35.434511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e2a10086-b7eb-45bf-bc61-508339615f6d!
	I0925 18:30:35.435433       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"035cb8a6-9c68-4115-8f77-55377d6f71f5", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e2a10086-b7eb-45bf-bc61-508339615f6d became leader
	I0925 18:30:35.534888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e2a10086-b7eb-45bf-bc61-508339615f6d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Wed, 25 Sep 2024 18:32:37 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qntx9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qntx9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m42s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.78s)


Test pass (110/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.02
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 0.96
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 41.5
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 102.28
29 TestAddons/serial/Volcano 40.06
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.43
36 TestAddons/parallel/MetricsServer 5.36
38 TestAddons/parallel/CSI 46.29
39 TestAddons/parallel/Headlamp 15.82
40 TestAddons/parallel/CloudSpanner 5.24
42 TestAddons/parallel/NvidiaDevicePlugin 5.23
43 TestAddons/parallel/Yakd 10.4
44 TestAddons/StoppedEnableDisable 10.71
46 TestCertExpiration 228.77
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 26.08
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 29.27
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.06
64 TestFunctional/serial/MinikubeKubectlCmd 0.1
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 35.5
67 TestFunctional/serial/ComponentHealth 0.06
68 TestFunctional/serial/LogsCmd 0.78
69 TestFunctional/serial/LogsFileCmd 0.8
70 TestFunctional/serial/InvalidService 4.01
72 TestFunctional/parallel/ConfigCmd 0.26
73 TestFunctional/parallel/DashboardCmd 6.14
74 TestFunctional/parallel/DryRun 0.16
75 TestFunctional/parallel/InternationalLanguage 0.08
76 TestFunctional/parallel/StatusCmd 0.39
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
80 TestFunctional/parallel/ProfileCmd/profile_list 0.2
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
83 TestFunctional/parallel/ServiceCmd/DeployApp 10.16
84 TestFunctional/parallel/ServiceCmd/List 0.33
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.14
87 TestFunctional/parallel/ServiceCmd/Format 0.14
88 TestFunctional/parallel/ServiceCmd/URL 0.14
89 TestFunctional/parallel/ServiceCmdConnect 8.29
90 TestFunctional/parallel/AddonsCmd 0.11
91 TestFunctional/parallel/PersistentVolumeClaim 20.45
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.17
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
106 TestFunctional/parallel/MySQL 20.67
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 12.39
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.33
115 TestFunctional/parallel/NodeLabels 0.06
119 TestFunctional/parallel/Version/short 0.04
120 TestFunctional/parallel/Version/components 0.37
121 TestFunctional/parallel/License 0.19
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.01
124 TestFunctional/delete_minikube_cached_images 0.01
129 TestImageBuild/serial/Setup 14.07
130 TestImageBuild/serial/NormalBuild 1.65
131 TestImageBuild/serial/BuildWithBuildArg 0.8
132 TestImageBuild/serial/BuildWithDockerIgnore 0.67
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.67
137 TestJSONOutput/start/Command 29.76
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.49
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.4
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 5.3
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.19
165 TestMainNoArgs 0.04
166 TestMinikubeProfile 32.47
174 TestPause/serial/Start 25.16
175 TestPause/serial/SecondStartNoReconfiguration 32.97
176 TestPause/serial/Pause 0.49
177 TestPause/serial/VerifyStatus 0.12
178 TestPause/serial/Unpause 0.39
179 TestPause/serial/PauseAgain 0.53
180 TestPause/serial/DeletePaused 1.6
181 TestPause/serial/VerifyDeletedResources 0.06
195 TestRunningBinaryUpgrade 68.39
197 TestStoppedBinaryUpgrade/Setup 0.64
198 TestStoppedBinaryUpgrade/Upgrade 49.18
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
200 TestKubernetesUpgrade 305.34
TestDownloadOnly/v1.20.0/json-events (5.02s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (5.01952554s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.02s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (55.093547ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 18:29:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 18:29:26.491750   12727 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:29:26.491841   12727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:26.491847   12727 out.go:358] Setting ErrFile to fd 2...
	I0925 18:29:26.491853   12727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:26.492075   12727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-5898/.minikube/bin
	W0925 18:29:26.492213   12727 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19681-5898/.minikube/config/config.json: open /home/jenkins/minikube-integration/19681-5898/.minikube/config/config.json: no such file or directory
	I0925 18:29:26.492859   12727 out.go:352] Setting JSON to true
	I0925 18:29:26.493778   12727 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":708,"bootTime":1727288258,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:29:26.493870   12727 start.go:139] virtualization: kvm guest
	I0925 18:29:26.496271   12727 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0925 18:29:26.496386   12727 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19681-5898/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 18:29:26.496458   12727 notify.go:220] Checking for updates...
	I0925 18:29:26.497748   12727 out.go:169] MINIKUBE_LOCATION=19681
	I0925 18:29:26.499179   12727 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:29:26.500619   12727 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	I0925 18:29:26.501889   12727 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	I0925 18:29:26.503298   12727 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (0.96s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.96s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (53.544062ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/25 18:29:31
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 18:29:31.788764   12881 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:29:31.788863   12881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:31.788872   12881 out.go:358] Setting ErrFile to fd 2...
	I0925 18:29:31.788876   12881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:29:31.789037   12881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-5898/.minikube/bin
	I0925 18:29:31.789551   12881 out.go:352] Setting JSON to true
	I0925 18:29:31.790418   12881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":714,"bootTime":1727288258,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:29:31.790504   12881 start.go:139] virtualization: kvm guest
	I0925 18:29:31.792579   12881 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0925 18:29:31.792685   12881 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19681-5898/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 18:29:31.792716   12881 notify.go:220] Checking for updates...
	I0925 18:29:31.794094   12881 out.go:169] MINIKUBE_LOCATION=19681
	I0925 18:29:31.795557   12881 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:29:31.796936   12881 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	I0925 18:29:31.798191   12881 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	I0925 18:29:31.799444   12881 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I0925 18:29:33.226893   12715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:39913 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.55s)

TestOffline (41.5s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.908771619s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.593795907s)
--- PASS: TestOffline (41.50s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (44.938244ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (45.80961ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (102.28s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m42.281802008s)
--- PASS: TestAddons/Setup (102.28s)

TestAddons/serial/Volcano (40.06s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 8.363565ms
addons_test.go:835: volcano-scheduler stabilized in 8.85907ms
addons_test.go:843: volcano-admission stabilized in 9.033561ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-rq9nt" [c7235f7c-c51f-4f6f-a54d-a8c7dd855c9b] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003246564s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-258lt" [4d4dbe4f-45c7-43f9-854e-56e336af1a27] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003803613s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-bb6cp" [f4fa8626-5aab-4ed8-8d28-041d402f63fd] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003246265s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7181cb5f-c344-4965-af27-50727a2db94f] Pending
helpers_test.go:344: "test-job-nginx-0" [7181cb5f-c344-4965-af27-50727a2db94f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7181cb5f-c344-4965-af27-50727a2db94f] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003857723s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.728025718s)
--- PASS: TestAddons/serial/Volcano (40.06s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.43s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-klq6x" [24ca8ebe-3d3b-48a7-b44f-4765147ec0d3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003713106s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.426865987s)
--- PASS: TestAddons/parallel/InspektorGadget (10.43s)

TestAddons/parallel/MetricsServer (5.36s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.260367ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-5lrcn" [5acd34ff-05b5-4944-98e6-9578b96dd661] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00396261s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.36s)

TestAddons/parallel/CSI (46.29s)

=== RUN   TestAddons/parallel/CSI
I0925 18:42:07.649660   12715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0925 18:42:07.654366   12715 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0925 18:42:07.654396   12715 kapi.go:107] duration metric: took 4.742065ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.758181ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d67806dd-9627-4861-b7c4-62abe52615e6] Pending
helpers_test.go:344: "task-pv-pod" [d67806dd-9627-4861-b7c4-62abe52615e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d67806dd-9627-4861-b7c4-62abe52615e6] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003451713s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cff4fbee-89be-4043-8f35-85aca28995a9] Pending
helpers_test.go:344: "task-pv-pod-restore" [cff4fbee-89be-4043-8f35-85aca28995a9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cff4fbee-89be-4043-8f35-85aca28995a9] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00385346s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.266308712s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.29s)

TestAddons/parallel/Headlamp (15.82s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-77tf9" [b067cb73-752e-4c9b-a5d7-f837da23de5e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-77tf9" [b067cb73-752e-4c9b-a5d7-f837da23de5e] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003843207s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.379976595s)
--- PASS: TestAddons/parallel/Headlamp (15.82s)

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-t6jwv" [744df245-0574-4bf3-8120-2bacf478a4c1] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003861739s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/NvidiaDevicePlugin (5.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-spbz5" [bb5284d5-47c1-4dd6-8540-374b1dd30ffb] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003567211s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.23s)

TestAddons/parallel/Yakd (10.4s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pd9f6" [92d45406-7c51-497d-9482-88e1b7c10d72] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003373159s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.391979692s)
--- PASS: TestAddons/parallel/Yakd (10.40s)

TestAddons/StoppedEnableDisable (10.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.415788476s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.71s)

TestCertExpiration (228.77s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.804986114s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.252450615s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.714404206s)
--- PASS: TestCertExpiration (228.77s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19681-5898/.minikube/files/etc/test/nested/copy/12715/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (26.08s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (26.079106463s)
--- PASS: TestFunctional/serial/StartWithProxy (26.08s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.27s)

=== RUN   TestFunctional/serial/SoftStart
I0925 18:47:57.330643   12715 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (29.271896266s)
functional_test.go:663: soft start took 29.272751668s for "minikube" cluster.
I0925 18:48:26.602946   12715 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.27s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (35.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.499333226s)
functional_test.go:761: restart took 35.499437921s for "minikube" cluster.
I0925 18:49:02.410608   12715 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (35.50s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.78s)

TestFunctional/serial/LogsFileCmd (0.8s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd4058505551/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.80s)

TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (156.741441ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:31638 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)

TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.044943ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.962158ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)

TestFunctional/parallel/DashboardCmd (6.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/25 18:49:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 47149: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.14s)

TestFunctional/parallel/DryRun (0.16s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (74.177362ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0925 18:49:14.505682   47514 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:49:14.505778   47514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:49:14.505782   47514 out.go:358] Setting ErrFile to fd 2...
	I0925 18:49:14.505786   47514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:49:14.505950   47514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-5898/.minikube/bin
	I0925 18:49:14.506460   47514 out.go:352] Setting JSON to false
	I0925 18:49:14.507440   47514 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1896,"bootTime":1727288258,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:49:14.507557   47514 start.go:139] virtualization: kvm guest
	I0925 18:49:14.509967   47514 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0925 18:49:14.511276   47514 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19681-5898/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 18:49:14.511317   47514 notify.go:220] Checking for updates...
	I0925 18:49:14.511337   47514 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 18:49:14.512567   47514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:49:14.513728   47514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	I0925 18:49:14.514745   47514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	I0925 18:49:14.515903   47514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 18:49:14.516973   47514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 18:49:14.518384   47514 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:49:14.518684   47514 exec_runner.go:51] Run: systemctl --version
	I0925 18:49:14.521541   47514 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 18:49:14.532670   47514 out.go:177] * Using the none driver based on existing profile
	I0925 18:49:14.533883   47514 start.go:297] selected driver: none
	I0925 18:49:14.533893   47514 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:49:14.534000   47514 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 18:49:14.534021   47514 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0925 18:49:14.534303   47514 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0925 18:49:14.536540   47514 out.go:201] 
	W0925 18:49:14.537629   47514 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0925 18:49:14.538793   47514 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.16s)

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (78.730348ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0925 18:49:14.665706   47544 out.go:345] Setting OutFile to fd 1 ...
	I0925 18:49:14.665811   47544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:49:14.665821   47544 out.go:358] Setting ErrFile to fd 2...
	I0925 18:49:14.665826   47544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0925 18:49:14.666069   47544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-5898/.minikube/bin
	I0925 18:49:14.666563   47544 out.go:352] Setting JSON to false
	I0925 18:49:14.667578   47544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1897,"bootTime":1727288258,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 18:49:14.667681   47544 start.go:139] virtualization: kvm guest
	I0925 18:49:14.669656   47544 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0925 18:49:14.670871   47544 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19681-5898/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 18:49:14.670920   47544 out.go:177]   - MINIKUBE_LOCATION=19681
	I0925 18:49:14.670924   47544 notify.go:220] Checking for updates...
	I0925 18:49:14.673018   47544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 18:49:14.674287   47544 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	I0925 18:49:14.675535   47544 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	I0925 18:49:14.676876   47544 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 18:49:14.678108   47544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 18:49:14.679489   47544 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0925 18:49:14.679819   47544 exec_runner.go:51] Run: systemctl --version
	I0925 18:49:14.682443   47544 driver.go:394] Setting default libvirt URI to qemu:///system
	I0925 18:49:14.693539   47544 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0925 18:49:14.694590   47544 start.go:297] selected driver: none
	I0925 18:49:14.694611   47544 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0925 18:49:14.694721   47544 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 18:49:14.694750   47544 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0925 18:49:14.695102   47544 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0925 18:49:14.697631   47544 out.go:201] 
	W0925 18:49:14.698835   47544 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0925 18:49:14.700000   47544 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.39s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.39s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "150.637019ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.551084ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.20s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "160.377341ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.252239ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-lv4ff" [550c1a16-bbde-46c2-bdf9-b5d1f3b2782a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-lv4ff" [550c1a16-bbde-46c2-bdf9-b5d1f3b2782a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003217413s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "320.917355ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:31636
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

TestFunctional/parallel/ServiceCmd/Format (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.14s)

TestFunctional/parallel/ServiceCmd/URL (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:31636
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)

TestFunctional/parallel/ServiceCmdConnect (8.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qvr9l" [8d6ac702-b02c-493c-9c33-c341f5f6fe53] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qvr9l" [8d6ac702-b02c-493c-9c33-c341f5f6fe53] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003624374s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:32221
functional_test.go:1675: http://10.138.0.48:32221: success! body:

Hostname: hello-node-connect-67bdd5bbb4-qvr9l

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:32221
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.29s)
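As an illustration only (these helpers are not part of the test harness): the URL that `minikube service <name> --url` prints, e.g. `http://10.138.0.48:32221`, can be split into host and port with POSIX sed and then probed the same way the registry test does with `wget --spider`.

```shell
# Hypothetical helpers for working with a NodePort service URL.
url_host() { printf '%s\n' "$1" | sed -E 's#^[a-z]+://([^:/]+).*#\1#'; }
url_port() { printf '%s\n' "$1" | sed -E 's#^[a-z]+://[^:/]+:([0-9]+).*#\1#'; }

url="http://10.138.0.48:32221"
echo "host: $(url_host "$url") port: $(url_port "$url")"
# prints: host: 10.138.0.48 port: 32221

# Network probe, commented out since it needs the cluster above to be running:
# wget -q --spider "$url" && echo "endpoint reachable"
```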

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (20.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cb0d10f9-6bcd-4fa0-8d50-02045d19ce2f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003376982s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8de1d000-4446-41c0-af90-1624d31c8a7f] Pending
helpers_test.go:344: "sp-pod" [8de1d000-4446-41c0-af90-1624d31c8a7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8de1d000-4446-41c0-af90-1624d31c8a7f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00357561s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [35072008-6be6-49e4-8209-5126aa263fc1] Pending
helpers_test.go:344: "sp-pod" [35072008-6be6-49e4-8209-5126aa263fc1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [35072008-6be6-49e4-8209-5126aa263fc1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003493802s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 49255: operation not permitted
helpers_test.go:508: unable to kill pid 49207: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0efe76a8-833e-4b7a-9fc4-61a7e7a477e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0efe76a8-833e-4b7a-9fc4-61a7e7a477e6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003088681s
I0925 18:50:06.238433   12715 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.125.228 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MySQL (20.67s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vkkp2" [de29bb38-c92f-44a4-bf4c-3a83eab37db1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vkkp2" [de29bb38-c92f-44a4-bf4c-3a83eab37db1] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003611543s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-vkkp2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-vkkp2 -- mysql -ppassword -e "show databases;": exit status 1 (157.976872ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0925 18:50:23.761107   12715 retry.go:31] will retry after 1.441075728s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-vkkp2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-vkkp2 -- mysql -ppassword -e "show databases;": exit status 1 (108.194661ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0925 18:50:25.312067   12715 retry.go:31] will retry after 1.705133538s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-vkkp2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.67s)
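The MySQL log above shows the harness (retry.go) re-running the probe with a growing delay while the pod finishes initializing. A minimal sketch of that retry-until-ready pattern, with a hypothetical `retry` helper that is not minikube's actual implementation:

```shell
# retry N CMD...: re-run CMD up to N times with exponential backoff,
# returning 0 as soon as it succeeds.
retry() {
  attempts=$1; shift
  delay=1
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    echo "attempt $i/$attempts failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))  # exponential backoff between probes
    i=$((i + 1))
  done
  return 1
}

# Usage against the pod from the log (needs the cluster to be running):
# retry 5 kubectl --context minikube exec mysql-6cdb49bbb-vkkp2 -- \
#   mysql -ppassword -e "show databases;"
```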

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (12.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.392855557s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (12.39s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.327909145s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.33s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.37s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.37s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (14.07s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.071116372s)
--- PASS: TestImageBuild/serial/Setup (14.07s)

TestImageBuild/serial/NormalBuild (1.65s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.651633165s)
--- PASS: TestImageBuild/serial/NormalBuild (1.65s)

TestImageBuild/serial/BuildWithBuildArg (0.80s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.80s)

TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

TestJSONOutput/start/Command (29.76s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (29.75829255s)
--- PASS: TestJSONOutput/start/Command (29.76s)

TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.40s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.30s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.295135015s)
--- PASS: TestJSONOutput/stop/Command (5.30s)

TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.245739ms)

-- stdout --
	{"specversion":"1.0","id":"42a309a9-8e82-406e-b3ef-26a4e71d0add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4593c57-9577-4d27-90b1-48c3b6c75e48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19681"}}
	{"specversion":"1.0","id":"8e6470ec-9a7a-4a55-b239-74d93e47a96d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea841a10-1048-4794-8ee0-023dc0ad5c4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig"}}
	{"specversion":"1.0","id":"d4e70f5e-f70e-435a-9230-ab100b9c79dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube"}}
	{"specversion":"1.0","id":"a1e7cb61-0f90-45e6-a762-748c113d4c00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b30f9f0c-65aa-44f4-b757-c641f562f3be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1ada287a-bbd5-4bfd-a558-0fefd948140b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
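The `--output=json` lines above are CloudEvents, one JSON object per line. As an illustration (not a minikube tool), a sed one-liner can pull the human-readable `message` field out of a captured event for quick scanning; the sample event below is trimmed and hypothetical:

```shell
# event_message EVENT: extract the "message" value from one CloudEvent line.
event_message() { printf '%s\n' "$1" | sed -E 's#.*"message":"([^"]*)".*#\1#'; }

line='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}'
event_message "$line"
# prints: The driver is not supported on linux/amd64
```

Note that regex-based JSON scraping breaks on escaped quotes inside the message; for anything beyond eyeballing a log, `jq -r .data.message` is the robust choice.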

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (32.47s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.42676852s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.242428072s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.23953025s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (32.47s)

TestPause/serial/Start (25.16s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (25.156165648s)
--- PASS: TestPause/serial/Start (25.16s)

TestPause/serial/SecondStartNoReconfiguration (32.97s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (32.969668335s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.97s)

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.12s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (124.313464ms)
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.12s)

TestPause/serial/Unpause (0.39s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.39s)

TestPause/serial/PauseAgain (0.53s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

TestPause/serial/DeletePaused (1.6s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.598055777s)
--- PASS: TestPause/serial/DeletePaused (1.60s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (68.39s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1765918962 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1765918962 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (30.339776574s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.329845237s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.955958481s)
--- PASS: TestRunningBinaryUpgrade (68.39s)

TestStoppedBinaryUpgrade/Setup (0.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

TestStoppedBinaryUpgrade/Upgrade (49.18s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.955943942 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.955943942 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (13.494541749s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.955943942 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.955943942 -p minikube stop: (23.651683758s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.036897611s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.18s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestKubernetesUpgrade (305.34s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (28.649004009s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.301991293s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (73.093902ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m15.520585522s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (61.495604ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.427287096s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.250253572s)
--- PASS: TestKubernetesUpgrade (305.34s)

Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.13
189 TestStartStop/group/newest-cni 0.13
190 TestStartStop/group/default-k8s-diff-port 0.12
191 TestStartStop/group/no-preload 0.12
192 TestStartStop/group/disable-driver-mounts 0.13
193 TestStartStop/group/embed-certs 0.13
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.12s)

TestStartStop/group/no-preload (0.12s)
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

TestStartStop/group/disable-driver-mounts (0.13s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
