Test Report: none_Linux 19667

39f19baf3a7e1c810682dda0eb22abd909c6f2ab:2024-09-18:36273

Test failures (1/168)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 71.81        |
TestAddons/parallel/Registry (71.81s)
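The failing assertion is the in-cluster connectivity probe at addons_test.go:347 in the transcript below. As a minimal sketch, the same check can be re-run by hand, assuming a running cluster with the registry addon enabled (the pod name registry-test mirrors the test's own):

	# Spawn a throwaway busybox pod and HEAD-request the registry Service;
	# the test expects "HTTP/1.1 200" among the wget -S response headers.
	kubectl --context minikube run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

In this run the command exited 1 after 1m0s with "error: timed out waiting for the condition".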

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.707367ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-pjkt7" [37c3d12e-c029-446f-ae1c-816691f53587] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00382668s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sr6mh" [6a37092e-8132-4577-a7db-ae572e46da9c] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004161297s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.082394701s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/18 19:50:09 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
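After the failed in-cluster check, the test resolves the node IP and probes the registry directly on port 5000 (the DEBUG GET above; the response is not shown in this excerpt). A host-side sketch of that same probe, assuming the node IP comes from minikube ip and the registry listens on 5000 as in this run:

	# Print the HTTP status of the registry root reached via the node IP.
	curl -sS -o /dev/null -w '%{http_code}\n' "http://$(out/minikube-linux-amd64 -p minikube ip):5000"

A 200 here, with the in-cluster wget still failing, would point at cluster DNS or the Service rather than the registry process itself.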
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:45847               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:38 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:40 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 18 Sep 24 19:40 UTC | 18 Sep 24 19:40 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:34.907477   18358 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:34.907618   18358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:34.907627   18358 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:34.907634   18358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:34.907830   18358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7534/.minikube/bin
	I0918 19:38:34.908455   18358 out.go:352] Setting JSON to false
	I0918 19:38:34.909354   18358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1264,"bootTime":1726687051,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:34.909457   18358 start.go:139] virtualization: kvm guest
	I0918 19:38:34.911772   18358 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0918 19:38:34.913476   18358 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7534/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:38:34.913506   18358 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:38:34.913600   18358 notify.go:220] Checking for updates...
	I0918 19:38:34.916199   18358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:34.917549   18358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	I0918 19:38:34.919005   18358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	I0918 19:38:34.920237   18358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:38:34.921486   18358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:34.922753   18358 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:34.933263   18358 out.go:177] * Using the none driver based on user configuration
	I0918 19:38:34.934518   18358 start.go:297] selected driver: none
	I0918 19:38:34.934530   18358 start.go:901] validating driver "none" against <nil>
	I0918 19:38:34.934539   18358 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:34.934580   18358 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0918 19:38:34.934882   18358 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0918 19:38:34.935356   18358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:34.935606   18358 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:38:34.935638   18358 cni.go:84] Creating CNI manager for ""
	I0918 19:38:34.935682   18358 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:34.935692   18358 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:34.935735   18358 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:34.937114   18358 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0918 19:38:34.938522   18358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/config.json ...
	I0918 19:38:34.938553   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/config.json: {Name:mk471e6aea9507ca28f3d99688faa029c3efa2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:34.938674   18358 start.go:360] acquireMachinesLock for minikube: {Name:mke448a8cf98932a0732986be6ee893948db3617 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 19:38:34.938704   18358 start.go:364] duration metric: took 18.655µs to acquireMachinesLock for "minikube"
	I0918 19:38:34.938716   18358 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:38:34.938777   18358 start.go:125] createHost starting for "" (driver="none")
	I0918 19:38:34.940087   18358 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0918 19:38:34.941302   18358 exec_runner.go:51] Run: systemctl --version
	I0918 19:38:34.943744   18358 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0918 19:38:34.943773   18358 client.go:168] LocalClient.Create starting
	I0918 19:38:34.943833   18358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7534/.minikube/certs/ca.pem
	I0918 19:38:34.943866   18358 main.go:141] libmachine: Decoding PEM data...
	I0918 19:38:34.943892   18358 main.go:141] libmachine: Parsing certificate...
	I0918 19:38:34.943946   18358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7534/.minikube/certs/cert.pem
	I0918 19:38:34.943981   18358 main.go:141] libmachine: Decoding PEM data...
	I0918 19:38:34.944018   18358 main.go:141] libmachine: Parsing certificate...
	I0918 19:38:34.944366   18358 client.go:171] duration metric: took 584.636µs to LocalClient.Create
	I0918 19:38:34.944387   18358 start.go:167] duration metric: took 648.126µs to libmachine.API.Create "minikube"
	I0918 19:38:34.944394   18358 start.go:293] postStartSetup for "minikube" (driver="none")
	I0918 19:38:34.944442   18358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:38:34.944470   18358 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:38:34.956477   18358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:38:34.956497   18358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:38:34.956505   18358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:38:34.958404   18358 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0918 19:38:34.959558   18358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7534/.minikube/addons for local assets ...
	I0918 19:38:34.959598   18358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7534/.minikube/files for local assets ...
	I0918 19:38:34.959617   18358 start.go:296] duration metric: took 15.210878ms for postStartSetup
	I0918 19:38:34.960969   18358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/config.json ...
	I0918 19:38:34.961185   18358 start.go:128] duration metric: took 22.396746ms to createHost
	I0918 19:38:34.961197   18358 start.go:83] releasing machines lock for "minikube", held for 22.484135ms
	I0918 19:38:34.961903   18358 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:38:34.961915   18358 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0918 19:38:34.963877   18358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 19:38:34.963939   18358 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:38:34.973194   18358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 19:38:34.973225   18358 start.go:495] detecting cgroup driver to use...
	I0918 19:38:34.973269   18358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 19:38:34.973391   18358 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:34.994377   18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 19:38:35.003915   18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 19:38:35.014932   18358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 19:38:35.014981   18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 19:38:35.027598   18358 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:35.038253   18358 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 19:38:35.050503   18358 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:35.063151   18358 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:38:35.071314   18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 19:38:35.079341   18358 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 19:38:35.090935   18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 19:38:35.099310   18358 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:38:35.107007   18358 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:38:35.117755   18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0918 19:38:35.315496   18358 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0918 19:38:35.380679   18358 start.go:495] detecting cgroup driver to use...
	I0918 19:38:35.380725   18358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 19:38:35.380829   18358 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:35.399969   18358 exec_runner.go:51] Run: which cri-dockerd
	I0918 19:38:35.400823   18358 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 19:38:35.408304   18358 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0918 19:38:35.408320   18358 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0918 19:38:35.408351   18358 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0918 19:38:35.415985   18358 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0918 19:38:35.416124   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1070745449 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0918 19:38:35.423120   18358 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0918 19:38:35.637504   18358 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0918 19:38:35.851602   18358 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 19:38:35.851767   18358 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0918 19:38:35.851781   18358 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0918 19:38:35.851819   18358 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0918 19:38:35.860528   18358 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0918 19:38:35.860660   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube678808258 /etc/docker/daemon.json
	I0918 19:38:35.868187   18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0918 19:38:36.066089   18358 exec_runner.go:51] Run: sudo systemctl restart docker
	I0918 19:38:36.356377   18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 19:38:36.366969   18358 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0918 19:38:36.381805   18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:36.392738   18358 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0918 19:38:36.610377   18358 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0918 19:38:36.826375   18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0918 19:38:37.036994   18358 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0918 19:38:37.051723   18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:37.062064   18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0918 19:38:37.282568   18358 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0918 19:38:37.347137   18358 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 19:38:37.347212   18358 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0918 19:38:37.348610   18358 start.go:563] Will wait 60s for crictl version
	I0918 19:38:37.348661   18358 exec_runner.go:51] Run: which crictl
	I0918 19:38:37.349542   18358 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0918 19:38:37.379000   18358 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0918 19:38:37.379063   18358 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0918 19:38:37.399329   18358 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0918 19:38:37.421679   18358 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0918 19:38:37.421767   18358 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0918 19:38:37.424400   18358 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0918 19:38:37.425489   18358 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 19:38:37.425593   18358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:37.425603   18358 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0918 19:38:37.425681   18358 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0918 19:38:37.425719   18358 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0918 19:38:37.471672   18358 cni.go:84] Creating CNI manager for ""
	I0918 19:38:37.471693   18358 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:37.471702   18358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 19:38:37.471722   18358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:38:37.471847   18358 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 19:38:37.471901   18358 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 19:38:37.480725   18358 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0918 19:38:37.480774   18358 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0918 19:38:37.488341   18358 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0918 19:38:37.488343   18358 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0918 19:38:37.488400   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0918 19:38:37.488341   18358 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0918 19:38:37.488416   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0918 19:38:37.488461   18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:38:37.500152   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0918 19:38:37.536043   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3334410818 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 19:38:37.538516   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2265485867 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 19:38:37.569706   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2520207745 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 19:38:37.632845   18358 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:38:37.641138   18358 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0918 19:38:37.641155   18358 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0918 19:38:37.641192   18358 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0918 19:38:37.648760   18358 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0918 19:38:37.648896   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2331271920 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0918 19:38:37.656197   18358 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0918 19:38:37.656213   18358 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0918 19:38:37.656246   18358 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0918 19:38:37.663160   18358 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:38:37.663275   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube520074694 /lib/systemd/system/kubelet.service
	I0918 19:38:37.670317   18358 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0918 19:38:37.670422   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2319329196 /var/tmp/minikube/kubeadm.yaml.new
	I0918 19:38:37.677625   18358 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0918 19:38:37.678880   18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0918 19:38:37.874779   18358 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0918 19:38:37.888013   18358 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube for IP: 10.138.0.48
	I0918 19:38:37.888036   18358 certs.go:194] generating shared ca certs ...
	I0918 19:38:37.888051   18358 certs.go:226] acquiring lock for ca certs: {Name:mk65b5fdc4f09d8572cba4b78a9b9522b46d6547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:37.888165   18358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.key
	I0918 19:38:37.888203   18358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7534/.minikube/proxy-client-ca.key
	I0918 19:38:37.888211   18358 certs.go:256] generating profile certs ...
	I0918 19:38:37.888264   18358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.key
	I0918 19:38:37.888282   18358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.crt with IP's: []
	I0918 19:38:38.219920   18358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.crt ...
	I0918 19:38:38.219945   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.crt: {Name:mk7a305a245408683f9dc09eec8cdb01252d189d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:38.220068   18358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.key ...
	I0918 19:38:38.220077   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.key: {Name:mkef102ad8868bb80cf4d3679d0c36d6221fcc8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:38.220136   18358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0918 19:38:38.220151   18358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0918 19:38:38.403245   18358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0918 19:38:38.403272   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk8e8f8432a65feae42322cf5536789412a3a331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:38.403427   18358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0918 19:38:38.403440   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkdf9557fc428470889289b76459a1ace027e047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:38.403501   18358 certs.go:381] copying /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt
	I0918 19:38:38.403572   18358 certs.go:385] copying /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key
	I0918 19:38:38.403621   18358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key
	I0918 19:38:38.403634   18358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0918 19:38:38.795716   18358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt ...
	I0918 19:38:38.795750   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt: {Name:mke9354a5935c18a60f656e73092a57c9dcd390a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:38.795922   18358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key ...
	I0918 19:38:38.795937   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key: {Name:mk8fe9ea55d5723bad0c40dbc5858f67dda4edb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:38.796118   18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:38:38.796164   18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/ca.pem (1078 bytes)
	I0918 19:38:38.796202   18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:38:38.796243   18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/key.pem (1675 bytes)
	I0918 19:38:38.796827   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:38:38.796968   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube114346312 /var/lib/minikube/certs/ca.crt
	I0918 19:38:38.806202   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 19:38:38.806349   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3190871643 /var/lib/minikube/certs/ca.key
	I0918 19:38:38.814025   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:38:38.814143   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube714172427 /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 19:38:38.821790   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 19:38:38.821891   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2316883142 /var/lib/minikube/certs/proxy-client-ca.key
	I0918 19:38:38.829377   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0918 19:38:38.829478   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2017926993 /var/lib/minikube/certs/apiserver.crt
	I0918 19:38:38.837488   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 19:38:38.837590   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3103838905 /var/lib/minikube/certs/apiserver.key
	I0918 19:38:38.844975   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:38:38.845077   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2199736669 /var/lib/minikube/certs/proxy-client.crt
	I0918 19:38:38.854065   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 19:38:38.854171   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube591505349 /var/lib/minikube/certs/proxy-client.key
	I0918 19:38:38.862457   18358 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0918 19:38:38.862475   18358 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:38.862502   18358 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:38.869981   18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:38:38.870151   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2523232502 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:38.878287   18358 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 19:38:38.878398   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1139217587 /var/lib/minikube/kubeconfig
	I0918 19:38:38.886121   18358 exec_runner.go:51] Run: openssl version
	I0918 19:38:38.888859   18358 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:38:38.897024   18358 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:38.898368   18358 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:38.898439   18358 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:38.901230   18358 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 19:38:38.909165   18358 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 19:38:38.910233   18358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 19:38:38.910274   18358 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:38.910391   18358 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 19:38:38.925250   18358 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:38:38.933316   18358 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:38:38.941391   18358 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0918 19:38:38.961529   18358 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:38:38.969607   18358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:38:38.969627   18358 kubeadm.go:157] found existing configuration files:
	
	I0918 19:38:38.969671   18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 19:38:38.977609   18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 19:38:38.977667   18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 19:38:38.985052   18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 19:38:38.993469   18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 19:38:38.993512   18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 19:38:39.000403   18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 19:38:39.007808   18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 19:38:39.007846   18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 19:38:39.015780   18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 19:38:39.022837   18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 19:38:39.022879   18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 19:38:39.029518   18358 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 19:38:39.061084   18358 kubeadm.go:310] W0918 19:38:39.060975   19261 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:38:39.061559   18358 kubeadm.go:310] W0918 19:38:39.061524   19261 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:38:39.063220   18358 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 19:38:39.063281   18358 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 19:38:39.150170   18358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:38:39.150271   18358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:38:39.150280   18358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:38:39.150284   18358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 19:38:39.160278   18358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:38:39.163056   18358 out.go:235]   - Generating certificates and keys ...
	I0918 19:38:39.163100   18358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 19:38:39.163115   18358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 19:38:39.312261   18358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:38:39.464336   18358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:38:39.646493   18358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:38:39.827871   18358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 19:38:40.037347   18358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 19:38:40.037384   18358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0918 19:38:40.418996   18358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 19:38:40.419094   18358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0918 19:38:40.909753   18358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:38:41.422374   18358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:38:41.607093   18358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 19:38:41.607270   18358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:38:41.957417   18358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:38:42.094341   18358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 19:38:42.368072   18358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:38:42.649834   18358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:38:42.835540   18358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:38:42.836112   18358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:38:42.838352   18358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:38:42.840660   18358 out.go:235]   - Booting up control plane ...
	I0918 19:38:42.840685   18358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:38:42.840700   18358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:38:42.840707   18358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:38:42.860325   18358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:38:42.864371   18358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:38:42.864408   18358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 19:38:43.102040   18358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 19:38:43.102065   18358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 19:38:43.603531   18358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.48967ms
	I0918 19:38:43.603551   18358 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 19:38:47.605319   18358 kubeadm.go:310] [api-check] The API server is healthy after 4.0017528s
	I0918 19:38:47.616560   18358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:38:47.626499   18358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:38:47.643236   18358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:38:47.643255   18358 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:38:47.651082   18358 kubeadm.go:310] [bootstrap-token] Using token: wjrm4f.2vgnn7i37ubo4hzx
	I0918 19:38:47.652697   18358 out.go:235]   - Configuring RBAC rules ...
	I0918 19:38:47.652724   18358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:38:47.655891   18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:38:47.661473   18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:38:47.663804   18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:38:47.667212   18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:38:47.669413   18358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:38:48.011654   18358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:38:48.434584   18358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 19:38:49.010931   18358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 19:38:49.011778   18358 kubeadm.go:310] 
	I0918 19:38:49.011796   18358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 19:38:49.011801   18358 kubeadm.go:310] 
	I0918 19:38:49.011806   18358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 19:38:49.011810   18358 kubeadm.go:310] 
	I0918 19:38:49.011814   18358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 19:38:49.011818   18358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:38:49.011823   18358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:38:49.011827   18358 kubeadm.go:310] 
	I0918 19:38:49.011831   18358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 19:38:49.011835   18358 kubeadm.go:310] 
	I0918 19:38:49.011840   18358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:38:49.011844   18358 kubeadm.go:310] 
	I0918 19:38:49.011848   18358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 19:38:49.011852   18358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:38:49.011857   18358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:38:49.011862   18358 kubeadm.go:310] 
	I0918 19:38:49.011870   18358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:38:49.011874   18358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 19:38:49.011880   18358 kubeadm.go:310] 
	I0918 19:38:49.011891   18358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wjrm4f.2vgnn7i37ubo4hzx \
	I0918 19:38:49.011901   18358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:28cdc99d0457e5db15d389dfa720477b3024488a6161fe0e97e3db0521042b91 \
	I0918 19:38:49.011905   18358 kubeadm.go:310] 	--control-plane 
	I0918 19:38:49.011909   18358 kubeadm.go:310] 
	I0918 19:38:49.011913   18358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:38:49.011920   18358 kubeadm.go:310] 
	I0918 19:38:49.011924   18358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wjrm4f.2vgnn7i37ubo4hzx \
	I0918 19:38:49.011927   18358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:28cdc99d0457e5db15d389dfa720477b3024488a6161fe0e97e3db0521042b91 
	I0918 19:38:49.014652   18358 cni.go:84] Creating CNI manager for ""
	I0918 19:38:49.014674   18358 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:49.016166   18358 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 19:38:49.017161   18358 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0918 19:38:49.026365   18358 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 19:38:49.026479   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4167341525 /etc/cni/net.d/1-k8s.conflist
	I0918 19:38:49.037115   18358 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:38:49.037197   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:49.037214   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_18T19_38_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0918 19:38:49.046034   18358 ops.go:34] apiserver oom_adj: -16
	I0918 19:38:49.100270   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:49.601112   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:50.100790   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:50.600275   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:51.100432   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:51.600922   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:52.100560   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:52.600236   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:53.101237   18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:38:53.162945   18358 kubeadm.go:1113] duration metric: took 4.125814699s to wait for elevateKubeSystemPrivileges
	I0918 19:38:53.162982   18358 kubeadm.go:394] duration metric: took 14.252711851s to StartCluster
	I0918 19:38:53.163007   18358 settings.go:142] acquiring lock: {Name:mk3846031f18742dba5e0055936aaf5360b0d10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:53.163095   18358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7534/kubeconfig
	I0918 19:38:53.163675   18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/kubeconfig: {Name:mk35981c537c4532b3420938e79612e6eea6d7d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:53.163885   18358 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:38:53.163967   18358 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0918 19:38:53.164050   18358 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:38:53.164082   18358 addons.go:69] Setting yakd=true in profile "minikube"
	I0918 19:38:53.164089   18358 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0918 19:38:53.164086   18358 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0918 19:38:53.164100   18358 addons.go:234] Setting addon yakd=true in "minikube"
	I0918 19:38:53.164104   18358 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0918 19:38:53.164093   18358 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0918 19:38:53.164107   18358 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0918 19:38:53.164119   18358 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0918 19:38:53.164123   18358 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0918 19:38:53.164137   18358 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0918 19:38:53.164139   18358 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0918 19:38:53.164146   18358 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0918 19:38:53.164152   18358 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0918 19:38:53.164153   18358 mustload.go:65] Loading cluster: minikube
	I0918 19:38:53.164157   18358 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0918 19:38:53.164111   18358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0918 19:38:53.164169   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.164176   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.164183   18358 addons.go:69] Setting volcano=true in profile "minikube"
	I0918 19:38:53.164198   18358 addons.go:234] Setting addon volcano=true in "minikube"
	I0918 19:38:53.164224   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.164130   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.164340   18358 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:38:53.164480   18358 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0918 19:38:53.164517   18358 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0918 19:38:53.164878   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.164893   18358 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0918 19:38:53.164898   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.164907   18358 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0918 19:38:53.164927   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.164157   18358 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0918 19:38:53.164957   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.164959   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.164971   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.165007   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.164130   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.165266   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.165284   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.165313   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.165486   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.165500   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.165527   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.165528   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.165539   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.164138   18358 addons.go:69] Setting registry=true in profile "minikube"
	I0918 19:38:53.165568   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.165583   18358 addons.go:234] Setting addon registry=true in "minikube"
	I0918 19:38:53.165608   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.165692   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.165709   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.165748   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.164934   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.164173   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.166229   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.166246   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.166273   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.164131   18358 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0918 19:38:53.166315   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.166424   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.166443   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.166473   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.164134   18358 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0918 19:38:53.166586   18358 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0918 19:38:53.166614   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.164878   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.166644   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.166677   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.166954   18358 out.go:177] * Configuring local host environment ...
	I0918 19:38:53.164878   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.167105   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.167139   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.167230   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.167243   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.164880   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.167271   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.167272   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.167302   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.164878   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.167843   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.167902   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0918 19:38:53.168260   18358 out.go:270] * 
	W0918 19:38:53.168277   18358 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0918 19:38:53.168285   18358 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0918 19:38:53.168301   18358 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0918 19:38:53.168306   18358 out.go:270] * 
	W0918 19:38:53.168345   18358 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0918 19:38:53.168352   18358 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0918 19:38:53.168358   18358 out.go:270] * 
	W0918 19:38:53.168382   18358 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0918 19:38:53.168389   18358 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0918 19:38:53.168395   18358 out.go:270] * 
	W0918 19:38:53.168401   18358 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0918 19:38:53.168427   18358 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:38:53.170345   18358 out.go:177] * Verifying Kubernetes components...
	I0918 19:38:53.172330   18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0918 19:38:53.185947   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.187676   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.187749   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.188034   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.188252   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.188278   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.188309   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.189050   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.191869   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.196693   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.204204   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.208651   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.208716   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.208662   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.208792   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.214379   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.218734   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.218992   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.219044   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.219258   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.219759   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.219791   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.226317   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.226332   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.226347   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.226383   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.226386   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.226394   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.230967   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.231021   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.232885   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.232911   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.234550   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.234610   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.234613   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.234629   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.234780   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.234834   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.235934   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.238710   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.239839   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.239862   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.244205   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.244228   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.244897   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.246147   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.247735   18358 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0918 19:38:53.247777   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.248423   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.248437   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.248469   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.250108   18358 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0918 19:38:53.250145   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:38:53.250736   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:38:53.250755   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:38:53.250786   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:38:53.260710   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.260767   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.264499   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.264528   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.265495   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.267709   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 19:38:53.269532   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.269685   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.269706   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.270697   18358 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 19:38:53.271527   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 19:38:53.271686   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.271901   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.271946   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.273348   18358 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:38:53.273383   18358 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 19:38:53.273509   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube834318975 /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:38:53.273690   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.273703   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.274909   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.274933   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.277670   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.278848   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 19:38:53.279163   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.279565   18358 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 19:38:53.280803   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.281334   18358 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:38:53.281367   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 19:38:53.281499   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1696474448 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:38:53.281600   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 19:38:53.281686   18358 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 19:38:53.281931   18358 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:38:53.283088   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.283143   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.283219   18358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:38:53.283239   18358 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0918 19:38:53.283248   18358 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:38:53.283294   18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:38:53.283900   18358 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:38:53.284006   18358 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 19:38:53.284130   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube351705620 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:38:53.284328   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 19:38:53.284492   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.284899   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.285001   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.286977   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 19:38:53.288020   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 19:38:53.288028   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.288100   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.292568   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.292653   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.294466   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 19:38:53.296635   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.297932   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.297957   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.298039   18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:38:53.298069   18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 19:38:53.298247   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube410085531 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:38:53.301025   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.301045   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.301361   18358 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 19:38:53.301720   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:38:53.302117   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.302759   18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:38:53.302794   18358 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 19:38:53.302904   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1213085578 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:38:53.303891   18358 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 19:38:53.304868   18358 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:38:53.304891   18358 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 19:38:53.305001   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3135139328 /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:38:53.305754   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.305802   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.305994   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.306236   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:38:53.307207   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.307502   18358 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0918 19:38:53.308543   18358 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0918 19:38:53.308574   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0918 19:38:53.308692   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4240804950 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0918 19:38:53.308837   18358 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 19:38:53.310147   18358 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 19:38:53.310178   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 19:38:53.310319   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3747871862 /etc/kubernetes/addons/deployment.yaml
	I0918 19:38:53.311558   18358 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:38:53.311945   18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 19:38:53.312084   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2254601592 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:38:53.312125   18358 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:38:53.312143   18358 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 19:38:53.312244   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3138600989 /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:38:53.315399   18358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:38:53.315429   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 19:38:53.315539   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2733771184 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:38:53.316513   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:38:53.316637   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2632347266 /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:38:53.321624   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.321652   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.322456   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.322506   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.323828   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.323850   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.327118   18358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:38:53.327153   18358 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 19:38:53.327291   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3470057 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:38:53.327661   18358 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:38:53.327684   18358 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 19:38:53.327794   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1878468979 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:38:53.329528   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.329746   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.331462   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:38:53.331508   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:38:53.331971   18358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:38:53.331990   18358 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 19:38:53.332089   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube125866827 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:38:53.334379   18358 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0918 19:38:53.334407   18358 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 19:38:53.334579   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 19:38:53.338522   18358 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 19:38:53.339635   18358 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:38:53.339664   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 19:38:53.339778   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1028062374 /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:38:53.339947   18358 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0918 19:38:53.340926   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:38:53.343316   18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:38:53.343346   18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 19:38:53.343442   18358 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0918 19:38:53.343652   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3364091950 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:38:53.346268   18358 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:38:53.346308   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0918 19:38:53.346836   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1320522976 /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:38:53.347874   18358 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0918 19:38:53.347909   18358 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0918 19:38:53.348032   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1806307833 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0918 19:38:53.349514   18358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:38:53.349587   18358 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 19:38:53.350467   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4120198128 /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:38:53.350731   18358 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:38:53.350761   18358 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 19:38:53.350877   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3624673884 /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:38:53.353384   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.353411   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.363527   18358 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:38:53.366435   18358 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0918 19:38:53.366604   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube123769422 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:38:53.369446   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.369507   18358 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:38:53.369523   18358 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0918 19:38:53.369530   18358 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0918 19:38:53.369570   18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:38:53.371085   18358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:38:53.371119   18358 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 19:38:53.371311   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube466298229 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:38:53.376161   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:38:53.378586   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:38:53.378613   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:38:53.378924   18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:38:53.378948   18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 19:38:53.379077   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube971339646 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:38:53.381215   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:38:53.384378   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:38:53.386720   18358 out.go:177]   - Using image docker.io/busybox:stable
	I0918 19:38:53.388048   18358 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 19:38:53.388510   18358 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:38:53.388533   18358 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 19:38:53.388649   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3633779932 /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:38:53.389304   18358 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:38:53.389693   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:38:53.389971   18358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:38:53.390003   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 19:38:53.390122   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube963188202 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:38:53.396067   18358 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:38:53.396096   18358 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 19:38:53.396204   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2384143518 /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:38:53.399419   18358 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:38:53.399549   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3632654475 /etc/kubernetes/addons/storageclass.yaml
	I0918 19:38:53.404692   18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:38:53.404910   18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 19:38:53.405081   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1543457254 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:38:53.410379   18358 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:38:53.410413   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 19:38:53.410531   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2102482626 /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:38:53.414877   18358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:38:53.414910   18358 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 19:38:53.415045   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube226649032 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:38:53.415380   18358 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:38:53.415408   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 19:38:53.415512   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2668014653 /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:38:53.417147   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:38:53.422055   18358 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 19:38:53.422086   18358 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 19:38:53.422191   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3529073702 /etc/kubernetes/addons/ig-role.yaml
	I0918 19:38:53.439289   18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:38:53.439412   18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 19:38:53.439567   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3092662641 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:38:53.446509   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:38:53.446788   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:38:53.447942   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:38:53.468440   18358 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:38:53.468476   18358 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 19:38:53.468618   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1476691340 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:38:53.475044   18358 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:38:53.475080   18358 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 19:38:53.475220   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3940388329 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:38:53.500458   18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:38:53.500500   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 19:38:53.500641   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1583852321 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:38:53.516511   18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:38:53.516554   18358 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 19:38:53.516683   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2311948277 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:38:53.552898   18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:38:53.552998   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 19:38:53.553237   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4230616940 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:38:53.581447   18358 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:38:53.581490   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 19:38:53.581687   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2802059548 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:38:53.595225   18358 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0918 19:38:53.613240   18358 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:38:53.613282   18358 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 19:38:53.613419   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube132100541 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:38:53.628312   18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:38:53.628348   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 19:38:53.628981   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube357447146 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:38:53.641344   18358 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0918 19:38:53.644243   18358 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0918 19:38:53.644267   18358 node_ready.go:38] duration metric: took 2.895093ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0918 19:38:53.644277   18358 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:38:53.660037   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:38:53.660599   18358 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:53.661840   18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:38:53.661893   18358 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 19:38:53.662066   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3926256249 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:38:53.666470   18358 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:38:53.666497   18358 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 19:38:53.666588   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2699613024 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:38:53.708457   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:38:53.784078   18358 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:38:53.784118   18358 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 19:38:53.784262   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3433676318 /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:38:53.843488   18358 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:38:53.843520   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 19:38:53.843655   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1481242130 /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:38:53.851362   18358 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0918 19:38:53.900598   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:38:54.157159   18358 addons.go:475] Verifying addon registry=true in "minikube"
	I0918 19:38:54.159540   18358 out.go:177] * Verifying registry addon...
	I0918 19:38:54.163861   18358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 19:38:54.172019   18358 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 19:38:54.172041   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
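
	The kapi.go lines that follow poll the registry pods until they leave Pending and report Ready. An approximate kubectl equivalent of that wait loop, assuming the same label selector, namespace, and timeout:

		kubectl --context minikube -n kube-system wait pod \
		  -l kubernetes.io/minikube-addons=registry \
		  --for=condition=Ready --timeout=6m
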
	I0918 19:38:54.358275   18358 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0918 19:38:54.359810   18358 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0918 19:38:54.475480   18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.028894011s)
	I0918 19:38:54.477821   18358 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0918 19:38:54.676865   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:54.819855   18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.371869433s)
	I0918 19:38:54.992329   18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.091659081s)
	I0918 19:38:55.177576   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:55.180020   18358 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0918 19:38:55.180044   18358 pod_ready.go:82] duration metric: took 1.519366013s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:55.180056   18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:55.438778   18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.778674911s)
	W0918 19:38:55.438813   18358 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:38:55.438840   18358 retry.go:31] will retry after 270.743625ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
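
	This failure is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply that creates the CRD for it, so the REST mapping does not exist yet, and minikube simply retries (below, with apply --force). The standard manual workaround is a two-phase apply against the same manifests:

		# Phase 1: create the CRD and wait for it to be established
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# Phase 2: the custom resource now maps and applies cleanly
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
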
	I0918 19:38:55.670370   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:55.710471   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:38:56.174498   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:56.524295   18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.815780821s)
	I0918 19:38:56.524336   18358 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0918 19:38:56.527887   18358 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 19:38:56.530653   18358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 19:38:56.541081   18358 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 19:38:56.541111   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:38:56.671800   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:56.686309   18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.310110448s)
	I0918 19:38:57.035623   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:38:57.168343   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:57.186532   18358 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0918 19:38:57.536105   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:38:57.667483   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:58.036210   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:38:58.167836   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:58.535556   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:38:58.668469   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:58.775758   18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.0652296s)
	I0918 19:38:59.036007   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:38:59.168282   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:38:59.186316   18358 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0918 19:38:59.186334   18358 pod_ready.go:82] duration metric: took 4.006271162s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.186344   18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.190924   18358 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0918 19:38:59.190942   18358 pod_ready.go:82] duration metric: took 4.590676ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.190951   18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rkhh" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.195211   18358 pod_ready.go:93] pod "kube-proxy-6rkhh" in "kube-system" namespace has status "Ready":"True"
	I0918 19:38:59.195235   18358 pod_ready.go:82] duration metric: took 4.277487ms for pod "kube-proxy-6rkhh" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.195247   18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.199249   18358 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0918 19:38:59.199268   18358 pod_ready.go:82] duration metric: took 4.013479ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.199279   18358 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace to be "Ready" ...
	I0918 19:38:59.537633   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:38:59.667359   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:00.036250   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:00.167943   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:00.257876   18358 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 19:39:00.258106   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3275120385 /var/lib/minikube/google_application_credentials.json
	I0918 19:39:00.267885   18358 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 19:39:00.268004   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2008867218 /var/lib/minikube/google_cloud_project
	I0918 19:39:00.279993   18358 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0918 19:39:00.280054   18358 host.go:66] Checking if "minikube" exists ...
	I0918 19:39:00.280579   18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0918 19:39:00.280596   18358 api_server.go:166] Checking apiserver status ...
	I0918 19:39:00.280627   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:39:00.297332   18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
	I0918 19:39:00.308935   18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
	I0918 19:39:00.308994   18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
	I0918 19:39:00.317646   18358 api_server.go:204] freezer state: "THAWED"
	I0918 19:39:00.317674   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:39:00.322680   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
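
	The health probe above resolves the apiserver process, confirms its freezer cgroup is THAWED (i.e., not paused), and only then queries /healthz. A rough by-hand equivalent, assuming the same host and that /healthz is readable anonymously (the default system:public-info-viewer binding normally allows this):

		PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
		sudo grep ':freezer:' "/proc/$PID/cgroup"
		curl -ks https://10.138.0.48:8443/healthz   # expect: ok
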
	I0918 19:39:00.322736   18358 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 19:39:00.343201   18358 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:00.345514   18358 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 19:39:00.367329   18358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:00.367376   18358 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 19:39:00.367537   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1759379821 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:00.376199   18358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:00.376228   18358 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 19:39:00.376347   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1096912487 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:00.385778   18358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:00.385813   18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 19:39:00.385932   18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3046175433 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:00.396557   18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
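
	The gcp-auth addon installs a namespace, a service, and a mutating admission webhook that injects the staged credentials into new pods. After the apply, the webhook registration can be confirmed by listing the cluster's mutating webhook configurations (the exact object name is addon-defined, so none is assumed here):

		kubectl --context minikube get mutatingwebhookconfigurations
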
	I0918 19:39:00.535426   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:00.723567   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:01.091397   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:01.148538   18358 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0918 19:39:01.150349   18358 out.go:177] * Verifying gcp-auth addon...
	I0918 19:39:01.152830   18358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 19:39:01.191173   18358 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:01.191701   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:01.204040   18358 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:01.535487   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:01.667399   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:02.035321   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:02.207043   18358 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:02.207067   18358 pod_ready.go:82] duration metric: took 3.007779715s for pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:02.207081   18358 pod_ready.go:39] duration metric: took 8.56278401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:02.207101   18358 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:39:02.207170   18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:39:02.224385   18358 api_server.go:72] duration metric: took 9.055927801s to wait for apiserver process to appear ...
	I0918 19:39:02.224415   18358 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:39:02.224444   18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0918 19:39:02.228465   18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0918 19:39:02.229432   18358 api_server.go:141] control plane version: v1.31.1
	I0918 19:39:02.229457   18358 api_server.go:131] duration metric: took 5.033146ms to wait for apiserver health ...
	I0918 19:39:02.229467   18358 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:39:02.257707   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:02.261953   18358 system_pods.go:59] 17 kube-system pods found
	I0918 19:39:02.261985   18358 system_pods.go:61] "coredns-7c65d6cfc9-zwccs" [63bc68cd-9f53-479a-a2a5-9336a0e5deaf] Running
	I0918 19:39:02.261994   18358 system_pods.go:61] "csi-hostpath-attacher-0" [06c4e199-4378-4232-bde2-37607f7da00d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:02.262001   18358 system_pods.go:61] "csi-hostpath-resizer-0" [31579844-294c-4f81-aa77-f7b5a6b9db22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:02.262008   18358 system_pods.go:61] "csi-hostpathplugin-dqj8p" [4aaa885d-1682-4ea1-8104-44fca44ecc93] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:02.262013   18358 system_pods.go:61] "etcd-ubuntu-20-agent-2" [473ef6bb-310b-4856-ba27-dc8195df0744] Running
	I0918 19:39:02.262019   18358 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [e2fb2a1b-8b37-4761-b413-41976d61b1e8] Running
	I0918 19:39:02.262024   18358 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [ee0e7890-eaa0-4cfe-9507-c0afa36eda0d] Running
	I0918 19:39:02.262029   18358 system_pods.go:61] "kube-proxy-6rkhh" [9389a9dd-4c3b-4a80-8997-902aa16b27fd] Running
	I0918 19:39:02.262033   18358 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [7b8e1543-8de7-45cc-a334-f0a39d7a83fe] Running
	I0918 19:39:02.262040   18358 system_pods.go:61] "metrics-server-84c5f94fbc-7lhq7" [feb14068-ae2c-4ab6-8d0f-81ec97b305a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:39:02.262046   18358 system_pods.go:61] "nvidia-device-plugin-daemonset-w5zgj" [653ea08c-da5c-4557-8a4d-a3a9fd4d1000] Running
	I0918 19:39:02.262067   18358 system_pods.go:61] "registry-66c9cd494c-pjkt7" [37c3d12e-c029-446f-ae1c-816691f53587] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:39:02.262075   18358 system_pods.go:61] "registry-proxy-sr6mh" [6a37092e-8132-4577-a7db-ae572e46da9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:39:02.262081   18358 system_pods.go:61] "snapshot-controller-56fcc65765-75b46" [3a59cf10-8aa3-4471-9606-a07d8292c058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:02.262086   18358 system_pods.go:61] "snapshot-controller-56fcc65765-g5hms" [54e4458b-6513-488a-8b09-cd4b7c02e213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:02.262089   18358 system_pods.go:61] "storage-provisioner" [eed0a073-ffd8-4934-9367-a2e95f84bffd] Running
	I0918 19:39:02.262094   18358 system_pods.go:61] "tiller-deploy-b48cc5f79-7zq4s" [abd0f145-1948-4210-a986-4dc65e777296] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:39:02.262098   18358 system_pods.go:74] duration metric: took 32.626772ms to wait for pod list to return data ...
	I0918 19:39:02.262105   18358 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:39:02.264476   18358 default_sa.go:45] found service account: "default"
	I0918 19:39:02.264496   18358 default_sa.go:55] duration metric: took 2.385201ms for default service account to be created ...
	I0918 19:39:02.264506   18358 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:39:02.272325   18358 system_pods.go:86] 17 kube-system pods found
	I0918 19:39:02.272351   18358 system_pods.go:89] "coredns-7c65d6cfc9-zwccs" [63bc68cd-9f53-479a-a2a5-9336a0e5deaf] Running
	I0918 19:39:02.272359   18358 system_pods.go:89] "csi-hostpath-attacher-0" [06c4e199-4378-4232-bde2-37607f7da00d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:02.272365   18358 system_pods.go:89] "csi-hostpath-resizer-0" [31579844-294c-4f81-aa77-f7b5a6b9db22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:02.272374   18358 system_pods.go:89] "csi-hostpathplugin-dqj8p" [4aaa885d-1682-4ea1-8104-44fca44ecc93] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:02.272378   18358 system_pods.go:89] "etcd-ubuntu-20-agent-2" [473ef6bb-310b-4856-ba27-dc8195df0744] Running
	I0918 19:39:02.272382   18358 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [e2fb2a1b-8b37-4761-b413-41976d61b1e8] Running
	I0918 19:39:02.272389   18358 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [ee0e7890-eaa0-4cfe-9507-c0afa36eda0d] Running
	I0918 19:39:02.272393   18358 system_pods.go:89] "kube-proxy-6rkhh" [9389a9dd-4c3b-4a80-8997-902aa16b27fd] Running
	I0918 19:39:02.272397   18358 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [7b8e1543-8de7-45cc-a334-f0a39d7a83fe] Running
	I0918 19:39:02.272408   18358 system_pods.go:89] "metrics-server-84c5f94fbc-7lhq7" [feb14068-ae2c-4ab6-8d0f-81ec97b305a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:39:02.272413   18358 system_pods.go:89] "nvidia-device-plugin-daemonset-w5zgj" [653ea08c-da5c-4557-8a4d-a3a9fd4d1000] Running
	I0918 19:39:02.272425   18358 system_pods.go:89] "registry-66c9cd494c-pjkt7" [37c3d12e-c029-446f-ae1c-816691f53587] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:39:02.272439   18358 system_pods.go:89] "registry-proxy-sr6mh" [6a37092e-8132-4577-a7db-ae572e46da9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:39:02.272448   18358 system_pods.go:89] "snapshot-controller-56fcc65765-75b46" [3a59cf10-8aa3-4471-9606-a07d8292c058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:02.272457   18358 system_pods.go:89] "snapshot-controller-56fcc65765-g5hms" [54e4458b-6513-488a-8b09-cd4b7c02e213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:02.272462   18358 system_pods.go:89] "storage-provisioner" [eed0a073-ffd8-4934-9367-a2e95f84bffd] Running
	I0918 19:39:02.272470   18358 system_pods.go:89] "tiller-deploy-b48cc5f79-7zq4s" [abd0f145-1948-4210-a986-4dc65e777296] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:39:02.272483   18358 system_pods.go:126] duration metric: took 7.970024ms to wait for k8s-apps to be running ...
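
	The two pod listings above are the same snapshot read twice: control-plane pods Running, addon pods still Pending with ContainersNotReady. A compact way to reproduce the name/phase view, using only stable jsonpath fields:

		kubectl --context minikube -n kube-system get pods \
		  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
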
	I0918 19:39:02.272492   18358 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:39:02.272549   18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:39:02.287089   18358 system_svc.go:56] duration metric: took 14.585391ms WaitForService to wait for kubelet
	I0918 19:39:02.287116   18358 kubeadm.go:582] duration metric: took 9.11866731s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:39:02.287134   18358 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:39:02.384598   18358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0918 19:39:02.384626   18358 node_conditions.go:123] node cpu capacity is 8
	I0918 19:39:02.384636   18358 node_conditions.go:105] duration metric: took 97.497748ms to run NodePressure ...
	I0918 19:39:02.384648   18358 start.go:241] waiting for startup goroutines ...
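
	The NodePressure check reads capacity and conditions from the node object. The same figures can be pulled directly, assuming the node name shown in the log:

		kubectl --context minikube get node ubuntu-20-agent-2 \
		  -o jsonpath='cpu: {.status.capacity.cpu}{"\n"}ephemeral-storage: {.status.capacity.ephemeral-storage}{"\n"}'
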
	I0918 19:39:02.535925   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:02.667565   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:03.035275   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:03.166791   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:03.535532   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:03.666536   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:04.035777   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:04.258493   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:04.535117   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:04.667308   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:05.035812   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:05.167286   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:05.535723   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:05.668065   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:06.063251   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:06.167736   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:06.535701   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:06.667160   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:07.035096   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:07.185023   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:07.534607   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:07.666959   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:08.035806   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:08.166612   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:08.535534   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:08.667848   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:09.035202   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:09.167920   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:09.535587   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:09.667145   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:10.035247   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:10.167614   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:10.535653   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:10.756814   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:11.035171   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:11.167210   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:11.535709   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:11.666977   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:12.035834   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:12.167525   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:12.535852   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:12.667484   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:13.035575   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:13.167289   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:13.535743   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:13.666933   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:14.034784   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:14.167604   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:14.534821   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:14.666990   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:15.037185   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:15.167722   18358 kapi.go:107] duration metric: took 21.003861204s to wait for kubernetes.io/minikube-addons=registry ...
	I0918 19:39:15.535194   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:16.036021   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:16.535855   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:17.036073   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:17.557372   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:18.034459   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:18.534437   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:19.035083   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:19.536191   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:20.035212   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:20.535023   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:21.034780   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:21.535299   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:22.036007   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:22.534777   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.034827   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.534999   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:24.034916   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:24.535403   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.035879   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.535197   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.035608   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.535560   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.035673   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.535090   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.036078   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.535379   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.035760   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.536665   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.035324   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.535621   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.036357   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.535765   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.035249   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.535159   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.035875   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.536487   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.036283   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.535515   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:35.034848   18358 kapi.go:107] duration metric: took 38.504195995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 19:39:42.656045   18358 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:42.656065   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:43.155921   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:43.656030   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:44.156242   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:44.655872   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:45.156089   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:45.656087   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:46.156350   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:46.656283   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:47.156308   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:47.657104   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:48.156318   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:48.656097   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 57 nearly identical kapi.go:96 "waiting for pod kubernetes.io/minikube-addons=gcp-auth, current state: Pending" lines, logged every ~500ms from 19:39:49 through 19:40:17, elided ...]
	I0918 19:40:17.656377   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:18.156490   18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:18.656697   18358 kapi.go:107] duration metric: took 1m17.503865455s to wait for kubernetes.io/minikube-addons=gcp-auth ...
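
The ~500ms cadence and the closing duration metric above reflect a poll-until-ready loop over pods matching a label selector. Below is a minimal sketch of that pattern using client-go and apimachinery's wait package; it is not minikube's actual kapi.go implementation, and the "gcp-auth" namespace, the 18-minute timeout, and the Running-phase check are assumptions made for illustration.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	start := time.Now()
    	selector := "kubernetes.io/minikube-addons=gcp-auth"
    	// Poll every 500ms, give up after an illustrative 18 minutes.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 18*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods("gcp-auth").List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, nil // treat API errors as transient and keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return len(pods.Items) > 0, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    }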
	I0918 19:40:18.664543   18358 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0918 19:40:18.666565   18358 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 19:40:18.667968   18358 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 19:40:18.669492   18358 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, helm-tiller, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0918 19:40:18.671045   18358 addons.go:510] duration metric: took 1m25.507084849s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner helm-tiller metrics-server yakd storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0918 19:40:18.671094   18358 start.go:246] waiting for cluster config update ...
	I0918 19:40:18.671118   18358 start.go:255] writing updated cluster config ...
	I0918 19:40:18.671374   18358 exec_runner.go:51] Run: rm -f paused
	I0918 19:40:18.716122   18358 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 19:40:18.718095   18358 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
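
The gcp-auth messages above describe an opt-out: pods carrying a label with the `gcp-auth-skip-secret` key are left alone by the mutating webhook. A minimal client-go sketch of creating such a pod follows; the label key comes from the log message, while the pod name, namespace, image, and label value are illustrative assumptions.

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-creds", // illustrative name
    			// Per the addon message above, the webhook skips pods that
    			// carry this label key; the value is arbitrary.
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "shell",
    				Image: "busybox", // illustrative image
    				Args:  []string{"sleep", "3600"},
    			}},
    		},
    	}
    	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

Since the webhook mutates pods at admission time, the label has to be present when the pod is created; adding it to an already-running pod has no effect, which is why the log also suggests recreating pods or rerunning the addon with --refresh.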
	
	
	==> Docker <==
	-- Logs begin at Mon 2024-08-05 23:30:02 UTC, end at Wed 2024-09-18 19:50:10 UTC. --
	Sep 18 19:42:26 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:42:26.559074445Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 18 19:42:26 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:42:26.561238542Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 18 19:42:36 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:42:36Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 18 19:42:37 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:42:37.945528863Z" level=info msg="ignoring event" container=d29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:43:49 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:43:49.553400477Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 18 19:43:49 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:43:49.555476210Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 18 19:45:29 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:45:29Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.770377046Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.770373683Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.772214529Z" level=error msg="Error running exec 69bbbb8be465456041fd8eae0028f658ee21aa187776c569bd680aba377bb139 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.966827221Z" level=info msg="ignoring event" container=46dfa86d512c9c664e0ebb0a672d157fe919a288930db5086acec8e1069ecfd5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:45:39 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:45:39Z" level=error msg="error getting RW layer size for container ID 'd29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb': Error response from daemon: No such container: d29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb"
	Sep 18 19:45:39 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:45:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb'"
	Sep 18 19:46:39 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:46:39.552793111Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 18 19:46:39 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:46:39.555242545Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 18 19:49:09 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:49:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86654dc69ca9aa697293059ac96a8c2cd9b26b151f2cbb8406753639513b5496/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 18 19:49:10 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:10.159211680Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:49:10 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:10.161362470Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:49:24 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:24.544960788Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:49:24 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:24.547232415Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:49:50 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:50.554501691Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:49:50 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:50.556769808Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:50:09 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:50:09.616945389Z" level=info msg="ignoring event" container=86654dc69ca9aa697293059ac96a8c2cd9b26b151f2cbb8406753639513b5496 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:50:09 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:50:09.951960399Z" level=info msg="ignoring event" container=eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:50:10 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:50:10.129734524Z" level=info msg="ignoring event" container=e98390f2c2154890e22784075d34b1c5f37c489992b45cc13f276c014cc9c41f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	46dfa86d512c9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   b8ec36877581d       gadget-7tl86
	9ac6d59915187       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   9ad8b668615e1       gcp-auth-89d5ffd79-xjxwx
	501aace3f8d42       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   332f3edbae5db       csi-hostpathplugin-dqj8p
	96944db015815       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   332f3edbae5db       csi-hostpathplugin-dqj8p
	ad8bbe941a8f6       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   332f3edbae5db       csi-hostpathplugin-dqj8p
	a4beae5b1d820       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   332f3edbae5db       csi-hostpathplugin-dqj8p
	dcefe7b0fe90c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   332f3edbae5db       csi-hostpathplugin-dqj8p
	a2322603aa9b1       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   2c8d751e3ad08       csi-hostpath-resizer-0
	2c39c3e33bbab       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   332f3edbae5db       csi-hostpathplugin-dqj8p
	6bbad19fd1f17       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   54559f02f1e87       csi-hostpath-attacher-0
	0bb4e1ed88ac0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   10015aa2d4402       snapshot-controller-56fcc65765-75b46
	5bbd3b9f135cb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   f42b014678821       snapshot-controller-56fcc65765-g5hms
	44234102ecd81       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   c67dfe37e5cc1       local-path-provisioner-86d989889c-b5hqx
	4744d7174f7c8       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   d10a64be3cebc       yakd-dashboard-67d98fc6b-dbkgq
	cfc2c868c8ecb       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   0fda2223a9da5       metrics-server-84c5f94fbc-7lhq7
	eeb81e732af94       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  11 minutes ago      Running             tiller                                   0                   4e18b9da7151d       tiller-deploy-b48cc5f79-7zq4s
	b402a83186826       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Running             registry                                 0                   96b3410ec14c7       registry-66c9cd494c-pjkt7
	2bf89b49875e7       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   4e3fe0f57bdff       cloud-spanner-emulator-769b77f747-lvrwr
	8676c3e1b5f13       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   6036eafc90f1e       nvidia-device-plugin-daemonset-w5zgj
	8ea02a517c77a       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   97baaa5aa6969       coredns-7c65d6cfc9-zwccs
	4cb614d6a3030       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   1434a351e9054       storage-provisioner
	59fe8f563a56d       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   d3a2b0d3c3234       kube-proxy-6rkhh
	5b8067656dbe6       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   f3c03b3d7053c       etcd-ubuntu-20-agent-2
	273b66fd77173       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   39d9065444c68       kube-controller-manager-ubuntu-20-agent-2
	ff77f2ad8d100       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   54870f7d13a0f       kube-scheduler-ubuntu-20-agent-2
	0796b5b669ba3       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   2f14dd667d96d       kube-apiserver-ubuntu-20-agent-2
	
	
	==> coredns [8ea02a517c77] <==
	[INFO] 10.244.0.10:45518 - 56900 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094297s
	[INFO] 10.244.0.10:33093 - 11528 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006762s
	[INFO] 10.244.0.10:33093 - 59654 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106465s
	[INFO] 10.244.0.10:50614 - 65039 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000069348s
	[INFO] 10.244.0.10:50614 - 11539 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000097546s
	[INFO] 10.244.0.10:47318 - 65055 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066847s
	[INFO] 10.244.0.10:47318 - 51040 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000106249s
	[INFO] 10.244.0.10:49574 - 29013 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000064402s
	[INFO] 10.244.0.10:49574 - 599 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000100027s
	[INFO] 10.244.0.10:56714 - 3989 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068373s
	[INFO] 10.244.0.10:56714 - 7831 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000120937s
	[INFO] 10.244.0.24:44725 - 50096 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000315148s
	[INFO] 10.244.0.24:35225 - 49988 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000372084s
	[INFO] 10.244.0.24:46601 - 24104 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098131s
	[INFO] 10.244.0.24:48261 - 29753 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129907s
	[INFO] 10.244.0.24:46048 - 4278 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124406s
	[INFO] 10.244.0.24:36044 - 11132 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098333s
	[INFO] 10.244.0.24:54692 - 33076 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003488481s
	[INFO] 10.244.0.24:44533 - 39182 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.009063428s
	[INFO] 10.244.0.24:36985 - 15796 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003410589s
	[INFO] 10.244.0.24:39978 - 37997 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005602661s
	[INFO] 10.244.0.24:52101 - 4071 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003358139s
	[INFO] 10.244.0.24:40989 - 10653 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00447799s
	[INFO] 10.244.0.24:55784 - 18915 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002570128s
	[INFO] 10.244.0.24:52846 - 20464 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002606596s
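
The NXDOMAIN fan-out above is ordinary resolv.conf search-path expansion: with ndots:5 (see the cri-dockerd resolv.conf rewrite in the Docker section), any name with fewer than five dots is tried against each search suffix before the literal name, and only the bare name resolves (NOERROR). A self-contained Go sketch that simulates which queries a resolver would emit; the search list is copied from the log, and the helper is illustrative, not CoreDNS source.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // searchCandidates mimics glibc-style search-list expansion: a name with
    // fewer dots than ndots is tried with each search suffix first, and the
    // literal name is tried last.
    func searchCandidates(name string, search []string, ndots int) []string {
    	var out []string
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			out = append(out, name+"."+s)
    		}
    	}
    	return append(out, name)
    }

    func main() {
    	// Search list from the resolv.conf rewrite in the Docker log above.
    	search := []string{
    		"default.svc.cluster.local",
    		"svc.cluster.local",
    		"cluster.local",
    		"us-west1-a.c.k8s-minikube.internal",
    		"c.k8s-minikube.internal",
    		"google.internal",
    	}
    	// "registry.kube-system.svc.cluster.local" has 4 dots, below ndots:5,
    	// so every suffix is tried; all but the bare name come back NXDOMAIN,
    	// matching the CoreDNS lines above.
    	for _, q := range searchCandidates("registry.kube-system.svc.cluster.local", search, 5) {
    		fmt.Println(q)
    	}
    }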
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T19_38_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:38:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:50:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:45:57 +0000   Wed, 18 Sep 2024 19:38:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:45:57 +0000   Wed, 18 Sep 2024 19:38:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:45:57 +0000   Wed, 18 Sep 2024 19:38:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:45:57 +0000   Wed, 18 Sep 2024 19:38:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    31f8c253-41fe-46b0-a38a-68a1f8eb05d1
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-lvrwr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-7tl86                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-xjxwx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-zwccs                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-dqj8p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6rkhh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-7lhq7              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-w5zgj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 registry-66c9cd494c-pjkt7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-75b46         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-g5hms         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-7zq4s                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-b5hqx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-dbkgq               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
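
As a cross-check, the 850m CPU request is the sum of the per-pod requests in the table above: 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 850m, i.e. 850/8000 ≈ 10.6% of the node's 8 CPUs, shown rounded to 10%. The memory figures check out the same way: requests 70Mi + 100Mi + 200Mi + 128Mi = 498Mi, limits 170Mi + 256Mi = 426Mi.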
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 90 81 4f 84 c3 08 06
	[  +1.011345] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3c 64 58 26 a7 08 06
	[  +0.023209] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 95 a5 d2 f1 f9 08 06
	[  +2.793293] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 e2 95 18 93 53 08 06
	[  +1.934893] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 3d f6 17 6e 9a 08 06
	[  +4.120358] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a e4 f4 4b 02 af 08 06
	[  +2.922409] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 7a 27 57 39 63 08 06
	[  +0.518245] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 6b 91 d0 03 ee 08 06
	[  +0.125285] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 91 89 a2 4c d3 08 06
	[Sep18 19:40] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 ee 98 0e d8 9a 08 06
	[  +0.027955] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 09 7e 3c f1 68 08 06
	[ +12.014529] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff de 45 41 1b 27 1c 08 06
	[  +0.000498] IPv4: martian source 10.244.0.24 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8a 30 d1 41 41 08 06
	
	
	==> etcd [5b8067656dbe] <==
	{"level":"info","ts":"2024-09-18T19:38:44.971810Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:44.971811Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T19:38:44.971833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:44.972153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:44.972177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:44.972136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:44.972273Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:44.972306Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:44.972965Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:44.973025Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:44.973817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-18T19:38:44.973817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T19:39:00.841645Z","caller":"traceutil/trace.go:171","msg":"trace[1078095586] linearizableReadLoop","detail":"{readStateIndex:901; appliedIndex:899; }","duration":"117.920404ms","start":"2024-09-18T19:39:00.723707Z","end":"2024-09-18T19:39:00.841627Z","steps":["trace[1078095586] 'read index received'  (duration: 59.547465ms)","trace[1078095586] 'applied index is now lower than readState.Index'  (duration: 58.372416ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:39:00.841691Z","caller":"traceutil/trace.go:171","msg":"trace[1807728034] transaction","detail":"{read_only:false; response_revision:882; number_of_response:1; }","duration":"117.950289ms","start":"2024-09-18T19:39:00.723718Z","end":"2024-09-18T19:39:00.841668Z","steps":["trace[1807728034] 'process raft request'  (duration: 117.870867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:39:00.841824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.093742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/gcp-auth/gcp-auth\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-18T19:39:00.841894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.048169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ubuntu-20-agent-2\" ","response":"range_response_count:1 size:4457"}
	{"level":"info","ts":"2024-09-18T19:39:00.841689Z","caller":"traceutil/trace.go:171","msg":"trace[1446476925] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"118.014347ms","start":"2024-09-18T19:39:00.723654Z","end":"2024-09-18T19:39:00.841668Z","steps":["trace[1446476925] 'process raft request'  (duration: 59.64422ms)","trace[1446476925] 'compare'  (duration: 58.185753ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:39:00.841922Z","caller":"traceutil/trace.go:171","msg":"trace[1793168772] range","detail":"{range_begin:/registry/minions/ubuntu-20-agent-2; range_end:; response_count:1; response_revision:882; }","duration":"118.079655ms","start":"2024-09-18T19:39:00.723834Z","end":"2024-09-18T19:39:00.841914Z","steps":["trace[1793168772] 'agreement among raft nodes before linearized reading'  (duration: 117.971692ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:39:00.841904Z","caller":"traceutil/trace.go:171","msg":"trace[479895819] range","detail":"{range_begin:/registry/services/specs/gcp-auth/gcp-auth; range_end:; response_count:0; response_revision:882; }","duration":"118.189257ms","start":"2024-09-18T19:39:00.723703Z","end":"2024-09-18T19:39:00.841892Z","steps":["trace[479895819] 'agreement among raft nodes before linearized reading'  (duration: 118.003245ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:39:00.841864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.415399ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:39:00.842076Z","caller":"traceutil/trace.go:171","msg":"trace[577211197] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:882; }","duration":"110.644169ms","start":"2024-09-18T19:39:00.731422Z","end":"2024-09-18T19:39:00.842066Z","steps":["trace[577211197] 'agreement among raft nodes before linearized reading'  (duration: 110.399242ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:39:01.087506Z","caller":"traceutil/trace.go:171","msg":"trace[363523474] transaction","detail":"{read_only:false; response_revision:884; number_of_response:1; }","duration":"160.817748ms","start":"2024-09-18T19:39:00.926668Z","end":"2024-09-18T19:39:01.087486Z","steps":["trace[363523474] 'process raft request'  (duration: 73.092748ms)","trace[363523474] 'compare'  (duration: 87.583886ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:48:45.113295Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1757}
	{"level":"info","ts":"2024-09-18T19:48:45.138171Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1757,"took":"24.392651ms","hash":173522336,"current-db-size-bytes":8388608,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4485120,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-09-18T19:48:45.138241Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":173522336,"revision":1757,"compact-revision":-1}
	
	
	==> gcp-auth [9ac6d5991518] <==
	2024/09/18 19:40:17 GCP Auth Webhook started!
	2024/09/18 19:40:33 Ready to marshal response ...
	2024/09/18 19:40:33 Ready to write response ...
	2024/09/18 19:40:34 Ready to marshal response ...
	2024/09/18 19:40:34 Ready to write response ...
	2024/09/18 19:40:57 Ready to marshal response ...
	2024/09/18 19:40:57 Ready to write response ...
	2024/09/18 19:40:57 Ready to marshal response ...
	2024/09/18 19:40:57 Ready to write response ...
	2024/09/18 19:40:57 Ready to marshal response ...
	2024/09/18 19:40:57 Ready to write response ...
	2024/09/18 19:49:09 Ready to marshal response ...
	2024/09/18 19:49:09 Ready to write response ...
	
	
	==> kernel <==
	 19:50:10 up 32 min,  0 users,  load average: 0.20, 0.31, 0.26
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [0796b5b669ba] <==
	W0918 19:39:37.543009       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.213.195:443: connect: connection refused
	W0918 19:39:42.146508       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.146.207:443: connect: connection refused
	E0918 19:39:42.146546       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.146.207:443: connect: connection refused" logger="UnhandledError"
	W0918 19:40:04.169852       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.146.207:443: connect: connection refused
	E0918 19:40:04.169897       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.146.207:443: connect: connection refused" logger="UnhandledError"
	W0918 19:40:04.187487       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.146.207:443: connect: connection refused
	E0918 19:40:04.187529       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.146.207:443: connect: connection refused" logger="UnhandledError"
	I0918 19:40:33.974555       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0918 19:40:33.992086       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0918 19:40:47.347190       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0918 19:40:47.356182       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0918 19:40:47.476980       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0918 19:40:47.478011       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0918 19:40:47.489930       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0918 19:40:47.644005       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0918 19:40:47.652613       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0918 19:40:47.656354       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0918 19:40:47.679218       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0918 19:40:48.494772       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0918 19:40:48.509680       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0918 19:40:48.671487       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0918 19:40:48.671479       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0918 19:40:48.680296       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0918 19:40:48.741108       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0918 19:40:48.873665       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [273b66fd7717] <==
	W0918 19:48:47.969622       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:48:47.969661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	[... the same *v1.PartialObjectMetadata reflector warning/error pair repeats 10 more times between 19:48:53 and 19:49:51 (20 lines elided) ...]
	W0918 19:50:09.584235       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:09.584282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:50:09.851684       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.009µs"
	
	
	==> kube-proxy [59fe8f563a56] <==
	I0918 19:38:55.486418       1 server_linux.go:66] "Using iptables proxy"
	I0918 19:38:55.661914       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0918 19:38:55.661986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:38:55.715923       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0918 19:38:55.716034       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:38:55.719679       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:38:55.720543       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:38:55.720680       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:38:55.722394       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:38:55.722538       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:38:55.722545       1 config.go:199] "Starting service config controller"
	I0918 19:38:55.722679       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:38:55.722916       1 config.go:328] "Starting node config controller"
	I0918 19:38:55.723062       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:38:55.823613       1 shared_informer.go:320] Caches are synced for node config
	I0918 19:38:55.823760       1 shared_informer.go:320] Caches are synced for service config
	I0918 19:38:55.823805       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ff77f2ad8d10] <==
	W0918 19:38:45.976872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:45.976894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:45.976993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:38:45.977021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:45.977130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:45.977158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:46.813380       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:46.813417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:46.815256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:38:46.815283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:46.882030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:38:46.882076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:46.882841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:46.882869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:46.967051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:38:46.967099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:47.136295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:38:47.136331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:47.154646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:47.154693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:47.172340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 19:38:47.172378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:47.233776       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:38:47.233834       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 19:38:48.872916       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Mon 2024-08-05 23:30:02 UTC, end at Wed 2024-09-18 19:50:10 UTC. --
	Sep 18 19:49:48 ubuntu-20-agent-2 kubelet[19854]: I0918 19:49:48.409768   19854 scope.go:117] "RemoveContainer" containerID="46dfa86d512c9c664e0ebb0a672d157fe919a288930db5086acec8e1069ecfd5"
	Sep 18 19:49:48 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:48.409952   19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-7tl86_gadget(44c6fa29-2386-4528-a289-3494a21ed93b)\"" pod="gadget/gadget-7tl86" podUID="44c6fa29-2386-4528-a289-3494a21ed93b"
	Sep 18 19:49:50 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:50.557326   19854 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Sep 18 19:49:50 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:50.557502   19854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tt7m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(8b8cf472-1baf-46b6-9123-b83cb79d18b7): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 18 19:49:50 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:50.558661   19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="8b8cf472-1baf-46b6-9123-b83cb79d18b7"
	Sep 18 19:49:56 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:56.408140   19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5fe4de29-d893-4ada-954b-8bfaa1ad485a"
	Sep 18 19:50:02 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:02.407367   19854 scope.go:117] "RemoveContainer" containerID="46dfa86d512c9c664e0ebb0a672d157fe919a288930db5086acec8e1069ecfd5"
	Sep 18 19:50:02 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:02.407536   19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-7tl86_gadget(44c6fa29-2386-4528-a289-3494a21ed93b)\"" pod="gadget/gadget-7tl86" podUID="44c6fa29-2386-4528-a289-3494a21ed93b"
	Sep 18 19:50:05 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:05.408450   19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="8b8cf472-1baf-46b6-9123-b83cb79d18b7"
	Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:09.408411   19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5fe4de29-d893-4ada-954b-8bfaa1ad485a"
	Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.790810   19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt7m5\" (UniqueName: \"kubernetes.io/projected/8b8cf472-1baf-46b6-9123-b83cb79d18b7-kube-api-access-tt7m5\") pod \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\" (UID: \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\") "
	Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.790877   19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b8cf472-1baf-46b6-9123-b83cb79d18b7-gcp-creds\") pod \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\" (UID: \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\") "
	Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.790987   19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b8cf472-1baf-46b6-9123-b83cb79d18b7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8b8cf472-1baf-46b6-9123-b83cb79d18b7" (UID: "8b8cf472-1baf-46b6-9123-b83cb79d18b7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.792986   19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8cf472-1baf-46b6-9123-b83cb79d18b7-kube-api-access-tt7m5" (OuterVolumeSpecName: "kube-api-access-tt7m5") pod "8b8cf472-1baf-46b6-9123-b83cb79d18b7" (UID: "8b8cf472-1baf-46b6-9123-b83cb79d18b7"). InnerVolumeSpecName "kube-api-access-tt7m5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.891323   19854 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b8cf472-1baf-46b6-9123-b83cb79d18b7-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.891353   19854 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tt7m5\" (UniqueName: \"kubernetes.io/projected/8b8cf472-1baf-46b6-9123-b83cb79d18b7-kube-api-access-tt7m5\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.294171   19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plncf\" (UniqueName: \"kubernetes.io/projected/6a37092e-8132-4577-a7db-ae572e46da9c-kube-api-access-plncf\") pod \"6a37092e-8132-4577-a7db-ae572e46da9c\" (UID: \"6a37092e-8132-4577-a7db-ae572e46da9c\") "
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.296453   19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a37092e-8132-4577-a7db-ae572e46da9c-kube-api-access-plncf" (OuterVolumeSpecName: "kube-api-access-plncf") pod "6a37092e-8132-4577-a7db-ae572e46da9c" (UID: "6a37092e-8132-4577-a7db-ae572e46da9c"). InnerVolumeSpecName "kube-api-access-plncf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.394955   19854 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-plncf\" (UniqueName: \"kubernetes.io/projected/6a37092e-8132-4577-a7db-ae572e46da9c-kube-api-access-plncf\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.422430   19854 scope.go:117] "RemoveContainer" containerID="eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.439528   19854 scope.go:117] "RemoveContainer" containerID="eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:10.440478   19854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4" containerID="eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.440524   19854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"} err="failed to get container status \"eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4\": rpc error: code = Unknown desc = Error response from daemon: No such container: eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.696591   19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6wzs\" (UniqueName: \"kubernetes.io/projected/37c3d12e-c029-446f-ae1c-816691f53587-kube-api-access-j6wzs\") pod \"37c3d12e-c029-446f-ae1c-816691f53587\" (UID: \"37c3d12e-c029-446f-ae1c-816691f53587\") "
	Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.698520   19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c3d12e-c029-446f-ae1c-816691f53587-kube-api-access-j6wzs" (OuterVolumeSpecName: "kube-api-access-j6wzs") pod "37c3d12e-c029-446f-ae1c-816691f53587" (UID: "37c3d12e-c029-446f-ae1c-816691f53587"). InnerVolumeSpecName "kube-api-access-j6wzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [4cb614d6a303] <==
	I0918 19:38:55.641900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:38:55.654563       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:38:55.654606       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:38:55.662092       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:38:55.663286       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdcc2797-2d65-4590-b30e-fc94f03bac3b", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_84cd3cab-f4fa-4515-b9c4-636d9499dcd1 became leader
	I0918 19:38:55.663494       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_84cd3cab-f4fa-4515-b9c4-636d9499dcd1!
	I0918 19:38:55.764572       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_84cd3cab-f4fa-4515-b9c4-636d9499dcd1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-66c9cd494c-pjkt7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox registry-66c9cd494c-pjkt7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod busybox registry-66c9cd494c-pjkt7: exit status 1 (67.817187ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Wed, 18 Sep 2024 19:40:57 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5bt7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k5bt7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m45s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m13s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m13s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-66c9cd494c-pjkt7" not found

** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod busybox registry-66c9cd494c-pjkt7: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.81s)
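
Note: the kubelet entries above show the probe pod never started — the pull of gcr.io/k8s-minikube/busybox failed with "unauthorized: authentication failed" — so this failure does not prove the registry Service was unreachable. A minimal sketch for separating the two failure modes (the pod name "registry-probe" and the <locally-cached-image> placeholder are illustrative, not taken from this run):

	# 1) Does the pull fail outside the cluster too? If so, the flake is image access on the CI host.
	docker pull gcr.io/k8s-minikube/busybox:latest
	# 2) Re-run the in-cluster probe with an image already present on the node, so no pull is needed.
	kubectl --context minikube run registry-probe --rm --restart=Never \
	  --image=<locally-cached-image> --image-pull-policy=Never -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

If step 2 succeeds while step 1 fails, the registry addon itself is healthy.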


Test pass (111/168)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 3.44
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 0.92
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 42.82
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 103.85
29 TestAddons/serial/Volcano 38.41
31 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/parallel/InspektorGadget 10.54
36 TestAddons/parallel/MetricsServer 5.36
37 TestAddons/parallel/HelmTiller 9.11
39 TestAddons/parallel/CSI 30
40 TestAddons/parallel/Headlamp 15.88
41 TestAddons/parallel/CloudSpanner 5.25
43 TestAddons/parallel/NvidiaDevicePlugin 6.23
44 TestAddons/parallel/Yakd 10.4
45 TestAddons/StoppedEnableDisable 10.67
47 TestCertExpiration 227.89
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 30.82
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 29.34
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.06
65 TestFunctional/serial/MinikubeKubectlCmd 0.1
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 37.77
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.81
70 TestFunctional/serial/LogsFileCmd 0.86
71 TestFunctional/serial/InvalidService 4.51
73 TestFunctional/parallel/ConfigCmd 0.26
74 TestFunctional/parallel/DashboardCmd 8.08
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.08
77 TestFunctional/parallel/StatusCmd 0.42
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
81 TestFunctional/parallel/ProfileCmd/profile_list 0.2
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.19
84 TestFunctional/parallel/ServiceCmd/DeployApp 10.15
85 TestFunctional/parallel/ServiceCmd/List 0.33
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.15
89 TestFunctional/parallel/ServiceCmd/URL 0.15
90 TestFunctional/parallel/ServiceCmdConnect 7.3
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 22.43
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.18
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
107 TestFunctional/parallel/MySQL 22.89
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.2
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.32
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.38
122 TestFunctional/parallel/License 0.21
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.02
125 TestFunctional/delete_minikube_cached_images 0.01
130 TestImageBuild/serial/Setup 13.88
131 TestImageBuild/serial/NormalBuild 1.51
132 TestImageBuild/serial/BuildWithBuildArg 0.81
133 TestImageBuild/serial/BuildWithDockerIgnore 0.6
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.54
138 TestJSONOutput/start/Command 30.58
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.48
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.4
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 5.31
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.19
166 TestMainNoArgs 0.04
167 TestMinikubeProfile 33.56
175 TestPause/serial/Start 29.12
176 TestPause/serial/SecondStartNoReconfiguration 24.39
177 TestPause/serial/Pause 0.5
178 TestPause/serial/VerifyStatus 0.13
179 TestPause/serial/Unpause 0.4
180 TestPause/serial/PauseAgain 0.53
181 TestPause/serial/DeletePaused 1.78
182 TestPause/serial/VerifyDeletedResources 0.06
196 TestRunningBinaryUpgrade 72.38
198 TestStoppedBinaryUpgrade/Setup 0.56
199 TestStoppedBinaryUpgrade/Upgrade 50.22
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
201 TestKubernetesUpgrade 309.15
TestDownloadOnly/v1.20.0/json-events (3.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (3.438757105s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (3.44s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (54.646591ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:37:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:37:46.272069   14449 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:37:46.272161   14449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:46.272167   14449 out.go:358] Setting ErrFile to fd 2...
	I0918 19:37:46.272171   14449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:46.272344   14449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7534/.minikube/bin
	W0918 19:37:46.272478   14449 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19667-7534/.minikube/config/config.json: open /home/jenkins/minikube-integration/19667-7534/.minikube/config/config.json: no such file or directory
	I0918 19:37:46.273046   14449 out.go:352] Setting JSON to true
	I0918 19:37:46.273914   14449 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1215,"bootTime":1726687051,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:37:46.274004   14449 start.go:139] virtualization: kvm guest
	I0918 19:37:46.276266   14449 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:37:46.276399   14449 notify.go:220] Checking for updates...
	W0918 19:37:46.276396   14449 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7534/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:37:46.277605   14449 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:37:46.278728   14449 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:37:46.279870   14449 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	I0918 19:37:46.281011   14449 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	I0918 19:37:46.282253   14449 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (0.92s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.92s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.450319ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:37:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:37:49.998574   14603 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:37:49.998696   14603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:49.998705   14603 out.go:358] Setting ErrFile to fd 2...
	I0918 19:37:49.998711   14603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:49.998911   14603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7534/.minikube/bin
	I0918 19:37:49.999433   14603 out.go:352] Setting JSON to true
	I0918 19:37:50.000235   14603 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1219,"bootTime":1726687051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:37:50.000328   14603 start.go:139] virtualization: kvm guest
	I0918 19:37:50.002177   14603 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0918 19:37:50.002282   14603 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7534/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:37:50.002335   14603 notify.go:220] Checking for updates...
	I0918 19:37:50.003751   14603 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:37:50.005222   14603 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:37:50.006584   14603 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	I0918 19:37:50.008135   14603 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	I0918 19:37:50.009356   14603 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I0918 19:37:51.405791   14437 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:45847 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.55s)
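
The mirror address above points at a short-lived HTTP server started by the test harness. A rough sketch of standing one up by hand, assuming the served directory mimics the dl.k8s.io release layout (e.g. <dir>/v1.31.1/bin/linux/amd64/kubectl); the directory and port below are illustrative:

	python3 -m http.server 8000 --directory /path/to/mirror &
	out/minikube-linux-amd64 start --download-only -p minikube --binary-mirror http://127.0.0.1:8000 --driver=none --bootstrapper=kubeadm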

TestOffline (42.82s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (41.235327449s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.587833834s)
--- PASS: TestOffline (42.82s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (45.17855ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (44.449029ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (103.85s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m43.854148628s)
--- PASS: TestAddons/Setup (103.85s)

TestAddons/serial/Volcano (38.41s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.960103ms
addons_test.go:897: volcano-scheduler stabilized in 8.13854ms
addons_test.go:905: volcano-admission stabilized in 8.268356ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-cblsr" [17c39765-cdd6-42fe-928a-75415c2e12f9] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003845725s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-lk7kk" [8bb8dc69-143b-4c7c-82cc-b6e6204efa43] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003204006s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-c8xcr" [621759e3-323c-499d-9408-73f361ff3900] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003571298s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [11b52bfb-673d-4c8d-a344-a04e4c77e3f9] Pending
helpers_test.go:344: "test-job-nginx-0" [11b52bfb-673d-4c8d-a344-a04e4c77e3f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [11b52bfb-673d-4c8d-a344-a04e4c77e3f9] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004017472s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.090585504s)
--- PASS: TestAddons/serial/Volcano (38.41s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/InspektorGadget (10.54s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7tl86" [44c6fa29-2386-4528-a289-3494a21ed93b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003774402s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.530923778s)
--- PASS: TestAddons/parallel/InspektorGadget (10.54s)

TestAddons/parallel/MetricsServer (5.36s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.539472ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7lhq7" [feb14068-ae2c-4ab6-8d0f-81ec97b305a1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004037231s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.36s)

TestAddons/parallel/HelmTiller (9.11s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.972856ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-7zq4s" [abd0f145-1948-4210-a986-4dc65e777296] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003226839s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.817086457s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.11s)

TestAddons/parallel/CSI (30s)

=== RUN   TestAddons/parallel/CSI
I0918 19:50:36.178125   14437 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0918 19:50:36.182422   14437 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:50:36.182444   14437 kapi.go:107] duration metric: took 4.332327ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.340418ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d117ed26-6b3f-46c0-bca8-bbc431202947] Pending
helpers_test.go:344: "task-pv-pod" [d117ed26-6b3f-46c0-bca8-bbc431202947] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d117ed26-6b3f-46c0-bca8-bbc431202947] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00387037s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [184fb644-c579-44d1-a96c-66fd8b4617a7] Pending
helpers_test.go:344: "task-pv-pod-restore" [184fb644-c579-44d1-a96c-66fd8b4617a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [184fb644-c579-44d1-a96c-66fd8b4617a7] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003617827s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.261400922s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (30.00s)
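
The helpers_test.go:394 lines above are a poll loop: the harness re-runs "kubectl get pvc hpvc -o jsonpath={.status.phase}" until the claim reports Bound. A minimal standalone sketch of that pattern in Go, assuming a hypothetical waitForPVCPhase helper and an illustrative 2s poll interval (the real harness's helper and cadence differ):

// pvcwait.go: poll a PVC's phase via kubectl until it matches, or time out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase is hypothetical; it shells out to kubectl the same way the
// log above shows and compares the reported phase against the expected one.
func waitForPVCPhase(kubectx, name, namespace, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	return fmt.Errorf("pvc %q did not reach phase %q within %v", name, want, timeout)
}

func main() {
	// Mirrors the wait above: 6m0s for pvc "hpvc" in namespace "default".
	if err := waitForPVCPhase("minikube", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}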

TestAddons/parallel/Headlamp (15.88s)
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-852qt" [c42ac1f6-218e-4c87-98c0-57fa44eb0192] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-852qt" [c42ac1f6-218e-4c87-98c0-57fa44eb0192] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003918821s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.412925368s)
--- PASS: TestAddons/parallel/Headlamp (15.88s)

TestAddons/parallel/CloudSpanner (5.25s)
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-lvrwr" [b37c708b-79b1-4589-ab81-3b60cfd50126] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00317293s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/NvidiaDevicePlugin (6.23s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w5zgj" [653ea08c-da5c-4557-8a4d-a3a9fd4d1000] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003859488s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.23s)

TestAddons/parallel/Yakd (10.4s)
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dbkgq" [49577caf-6577-49bc-b289-0a7820c1d91a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003929442s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.394549504s)
--- PASS: TestAddons/parallel/Yakd (10.40s)

TestAddons/StoppedEnableDisable (10.67s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.39051943s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.67s)

TestCertExpiration (227.89s)
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.021236565s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.189605337s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.673925914s)
--- PASS: TestCertExpiration (227.89s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19667-7534/.minikube/files/etc/test/nested/copy/14437/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.82s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.822584878s)
--- PASS: TestFunctional/serial/StartWithProxy (30.82s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.34s)
=== RUN   TestFunctional/serial/SoftStart
I0918 19:56:14.516174   14437 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (29.337631927s)
functional_test.go:663: soft start took 29.338361697s for "minikube" cluster.
I0918 19:56:43.854311   14437 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.34s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.77s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.769521161s)
functional_test.go:761: restart took 37.769635816s for "minikube" cluster.
I0918 19:57:21.931812   14437 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.77s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
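
The check above lists control-plane pods as JSON and reads each pod's phase and Ready condition. A sketch of the same inspection, assuming only the standard Kubernetes pod fields and the kubeadm "component" label; the struct models just what the check needs:

// componenthealth.go: report phase and readiness of control-plane pods.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "minikube", "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status // "True" corresponds to the "Ready" status printed above
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}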

TestFunctional/serial/LogsCmd (0.81s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.81s)

TestFunctional/serial/LogsFileCmd (0.86s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd383551405/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.86s)

TestFunctional/serial/InvalidService (4.51s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (159.390011ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:32089 |
	|-----------|-------------|-------------|--------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.176198577s)
--- PASS: TestFunctional/serial/InvalidService (4.51s)

TestFunctional/parallel/ConfigCmd (0.26s)
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.004382ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.405944ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)
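
The two non-zero exits above are expected: after "config unset", "minikube config get" reports the missing key with exit status 14. A sketch of a caller that treats that status as "not set" rather than as a failure; getConfig is a hypothetical wrapper, and the meaning of status 14 is taken from this log:

// configget.go: distinguish "key not set" from real failures of config get.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func getConfig(key string) (value string, set bool, err error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "minikube", "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false, nil // exit status 14: key not present, per the log above
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	val, ok, err := getConfig("cpus")
	fmt.Println(val, ok, err)
}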

TestFunctional/parallel/DashboardCmd (8.08s)
=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/18 19:57:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49446: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.08s)

TestFunctional/parallel/DryRun (0.15s)
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (78.744127ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
-- /stdout --
** stderr ** 
	I0918 19:57:36.561542   49812 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:57:36.561665   49812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:36.561675   49812 out.go:358] Setting ErrFile to fd 2...
	I0918 19:57:36.561681   49812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:36.561862   49812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7534/.minikube/bin
	I0918 19:57:36.562420   49812 out.go:352] Setting JSON to false
	I0918 19:57:36.563353   49812 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2406,"bootTime":1726687051,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:57:36.563442   49812 start.go:139] virtualization: kvm guest
	I0918 19:57:36.565882   49812 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0918 19:57:36.567138   49812 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7534/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:57:36.567176   49812 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:57:36.567229   49812 notify.go:220] Checking for updates...
	I0918 19:57:36.569798   49812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:57:36.571130   49812 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	I0918 19:57:36.572501   49812 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	I0918 19:57:36.573997   49812 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:57:36.575378   49812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:57:36.577077   49812 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:57:36.577383   49812 exec_runner.go:51] Run: systemctl --version
	I0918 19:57:36.579929   49812 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:57:36.591708   49812 out.go:177] * Using the none driver based on existing profile
	I0918 19:57:36.592834   49812 start.go:297] selected driver: none
	I0918 19:57:36.592847   49812 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:57:36.592965   49812 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:57:36.592992   49812 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0918 19:57:36.593302   49812 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0918 19:57:36.595404   49812 out.go:201] 
	W0918 19:57:36.596513   49812 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 19:57:36.597653   49812 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)
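
The dry run exits with status 23 because the requested 250MB falls below the 1800MB floor named in the error. A toy re-creation of that validation, with the floor and message taken from the error text above (minikube's real check lives elsewhere and is more involved):

// memcheck.go: illustrative memory-floor validation mirroring the dry-run error.
package main

import "fmt"

const minUsableMemoryMB = 1800 // from "usable minimum of 1800MB" in the log above

func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			reqMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250)) // mirrors --memory 250MB above
}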

TestFunctional/parallel/InternationalLanguage (0.08s)
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.687205ms)
-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
-- /stdout --
** stderr ** 
	I0918 19:57:36.714753   49841 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:57:36.714845   49841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:36.714852   49841 out.go:358] Setting ErrFile to fd 2...
	I0918 19:57:36.714857   49841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:36.715134   49841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7534/.minikube/bin
	I0918 19:57:36.715661   49841 out.go:352] Setting JSON to false
	I0918 19:57:36.716615   49841 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2406,"bootTime":1726687051,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:57:36.716706   49841 start.go:139] virtualization: kvm guest
	I0918 19:57:36.718864   49841 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0918 19:57:36.720236   49841 out.go:177]   - MINIKUBE_LOCATION=19667
	W0918 19:57:36.720229   49841 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7534/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:57:36.720267   49841 notify.go:220] Checking for updates...
	I0918 19:57:36.722709   49841 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:57:36.724013   49841 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	I0918 19:57:36.725216   49841 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	I0918 19:57:36.726779   49841 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:57:36.728044   49841 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:57:36.729717   49841 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:57:36.730008   49841 exec_runner.go:51] Run: systemctl --version
	I0918 19:57:36.732551   49841 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:57:36.743397   49841 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0918 19:57:36.745366   49841 start.go:297] selected driver: none
	I0918 19:57:36.745380   49841 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:57:36.745478   49841 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:57:36.745502   49841 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0918 19:57:36.745814   49841 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0918 19:57:36.748198   49841 out.go:201] 
	W0918 19:57:36.749645   49841 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 19:57:36.751109   49841 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)
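
This is the same dry-run failure as above, rendered in French. A sketch of reproducing localized output, assuming minikube selects its language from the standard LC_ALL/LANG environment variables; the test's actual mechanism may differ:

// i18n.go: run the dry-run start with a French locale in the environment.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "minikube",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=none", "--bootstrapper=kubeadm")
	// Assumption: a French locale in the environment triggers the translated output.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err) // expect "Utilisation du pilote none..." and status 23
}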

TestFunctional/parallel/StatusCmd (0.42s)
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.42s)
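
status supports Go-template and JSON output, as the two commands above show. A sketch of consuming the JSON form; the struct fields mirror the template keys in the log (.Host/.Kubelet/.APIServer/.Kubeconfig), and the full schema likely carries more fields than modeled here:

// statusjson.go: read minikube status as JSON and print the component states.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "minikube", "status", "-o", "json").Output()
	if err != nil {
		// status may use nonzero exits to encode degraded states, so stdout
		// can still contain JSON worth parsing.
		fmt.Println("status exited nonzero:", err)
	}
	var st status
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		fmt.Println(jerr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}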

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.2s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "152.726934ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.626638ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.20s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "145.893478ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.681167ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9pzkb" [4c78fb5d-c182-4781-ad34-54ec2157a256] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9pzkb" [4c78fb5d-c182-4781-ad34-54ec2157a256] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003875336s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "326.795095ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:31974
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:31974
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)
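
The service subcommands above resolve a NodePort service to a plain URL. A sketch that fetches the URL the same way and probes it with a single GET; parsing assumes the command prints one URL per line, as the endpoints found above suggest:

// serviceurl.go: resolve a service URL via minikube and probe it once.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "minikube",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("service command failed:", err)
		return
	}
	// Take the first line of output as the URL (assumption: one URL per line).
	url := strings.TrimSpace(strings.SplitN(string(out), "\n", 2)[0])
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}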

TestFunctional/parallel/ServiceCmdConnect (7.3s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mztw2" [04443ba8-a77d-4479-bdb6-d0145437d694] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-mztw2" [04443ba8-a77d-4479-bdb6-d0145437d694] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004137323s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:31858
functional_test.go:1675: http://10.138.0.48:31858: success! body:
Hostname: hello-node-connect-67bdd5bbb4-mztw2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:31858
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.30s)

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (22.43s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4b4314aa-f3e8-4aeb-a2f0-749bcddea97c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003217443s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8d4eb897-b868-431b-b356-de5e901e3bcb] Pending
helpers_test.go:344: "sp-pod" [8d4eb897-b868-431b-b356-de5e901e3bcb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8d4eb897-b868-431b-b356-de5e901e3bcb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003170808s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [da677865-6fff-482c-95ba-1090a18ee5c0] Pending
helpers_test.go:344: "sp-pod" [da677865-6fff-482c-95ba-1090a18ee5c0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [da677865-6fff-482c-95ba-1090a18ee5c0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00287497s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.43s)
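
The tail of this test is a persistence check: write a marker file through one pod, recreate the pod, and confirm the file is still on the claim. The same sequence in miniature, using the pod and file names from this log; the readiness wait between apply and the final check is elided here:

// pvcpersist.go: verify data on a PVC survives pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"kubectl", "--context", "minikube", "exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"kubectl", "--context", "minikube", "delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"kubectl", "--context", "minikube", "apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// In the real test, a readiness wait sits here before the final check.
		{"kubectl", "--context", "minikube", "exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}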

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 51528: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f141794f-8688-4e46-af17-05860a0c09f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f141794f-8688-4e46-af17-05860a0c09f6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003990943s
I0918 19:58:28.304654   14437 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
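
With "minikube tunnel" running, a LoadBalancer service gets a routable ingress IP, which is what the jsonpath above reads. A sketch that performs the same lookup and probes the address; in this run the working tunnel answered at http://10.104.90.138:

// tunnelip.go: read a LoadBalancer's ingress IP and probe it over the tunnel.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "minikube", "get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get("http://" + ip) // reachable only while the tunnel runs
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel at http://"+ip, "answered:", resp.Status)
}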

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.90.138 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MySQL (22.89s)
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-l9nkj" [874016db-27d9-4f31-a305-f430fc797184] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-l9nkj" [874016db-27d9-4f31-a305-f430fc797184] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004216819s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-l9nkj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-l9nkj -- mysql -ppassword -e "show databases;": exit status 1 (114.350452ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0918 19:58:45.787741   14437 retry.go:31] will retry after 630.168431ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-l9nkj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-l9nkj -- mysql -ppassword -e "show databases;": exit status 1 (147.149729ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0918 19:58:46.565673   14437 retry.go:31] will retry after 1.857794835s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-l9nkj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-l9nkj -- mysql -ppassword -e "show databases;": exit status 1 (118.396449ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0918 19:58:48.543215   14437 retry.go:31] will retry after 2.755039414s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-l9nkj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.89s)
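
The probe above fails twice while mysqld finishes starting, and retry.go backs off with uneven delays before the final attempt succeeds. An illustrative jittered-backoff loop in the same spirit; minikube's retry.go has its own policy, and the pod name below is the one from this run:

// mysqlretry.go: retry the mysql probe with jittered, roughly doubling delays.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retryMySQLProbe(attempts int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "--context", "minikube", "exec",
			"mysql-6cdb49bbb-l9nkj", "--", "mysql", "-ppassword", "-e", "show databases;").Run()
		if err == nil {
			return nil
		}
		// Jittered backoff, as the uneven "will retry after" delays above suggest.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	fmt.Println(retryMySQLProbe(5))
}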

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.195699461s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.32s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.314855478s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.32s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.38s)
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)

TestFunctional/parallel/License (0.21s)
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (13.88s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.884820287s)
--- PASS: TestImageBuild/serial/Setup (13.88s)

TestImageBuild/serial/NormalBuild (1.51s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.511020202s)
--- PASS: TestImageBuild/serial/NormalBuild (1.51s)

TestImageBuild/serial/BuildWithBuildArg (0.81s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.81s)

TestImageBuild/serial/BuildWithDockerIgnore (0.6s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.60s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)

TestJSONOutput/start/Command (30.58s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (30.579745244s)
--- PASS: TestJSONOutput/start/Command (30.58s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
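The Audit, DistinctCurrentSteps, and IncreasingCurrentSteps subtests run no new commands; they validate the JSON event stream captured from the start command above. A minimal Go sketch of the monotonicity check, assuming the CloudEvents line format visible in the TestErrorJSONOutput stdout later in this report; the checker itself is illustrative, not the test's actual code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// cloudEvent keeps only the fields this check needs from each
// JSON line that minikube emits with --output=json.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe in: minikube start --output=json ...
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip lines that are not JSON events
		}
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		step, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		// strictly increasing implies distinct, covering both subtests
		if step <= last {
			fmt.Printf("currentstep %d did not increase past %d\n", step, last)
			os.Exit(1)
		}
		last = step
	}
}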

                                                
                                    
TestJSONOutput/pause/Command (0.48s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.4s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.31s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.30518467s)
--- PASS: TestJSONOutput/stop/Command (5.31s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.994574ms)

-- stdout --
	{"specversion":"1.0","id":"64143a7f-a0af-4934-bdca-2092f61a1812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcfc5f8c-4dde-4a7d-a9a8-65ebd7ce4eb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"117fa457-7a40-429b-812a-1b96c6e27090","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"85c32d31-fee4-4981-9fea-1f73348139e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig"}}
	{"specversion":"1.0","id":"007589f9-6ff9-4dce-9743-d81dbe69bea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube"}}
	{"specversion":"1.0","id":"14c94550-491e-4b62-b42a-be0048c48545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"687ddcda-2c92-4824-a0e2-e93b8cf9b26c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80efaea9-5938-4936-8c40-67612bcd95e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
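The stdout above is minikube's --output=json stream: one CloudEvents object per line, terminated here by an io.k8s.sigs.minikube.error event carrying the exit code (56) and error name (DRV_UNSUPPORTED_OS). A minimal consumer sketch, assuming only the field names visible in that output:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// errEvent mirrors just the error-event fields shown in the stdout above.
type errEvent struct {
	Type string `json:"type"`
	Data struct {
		ExitCode string `json:"exitcode"`
		Name     string `json:"name"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe in the --output=json stream
	for sc.Scan() {
		var ev errEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. 56 DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64
			fmt.Printf("%s %s: %s\n", ev.Data.ExitCode, ev.Data.Name, ev.Data.Message)
		}
	}
}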

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (33.56s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.788653731s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.959488734s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.257823874s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.56s)

TestPause/serial/Start (29.12s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (29.120832052s)
--- PASS: TestPause/serial/Start (29.12s)

TestPause/serial/SecondStartNoReconfiguration (24.39s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (24.388913443s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.39s)

TestPause/serial/Pause (0.5s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.50s)

TestPause/serial/VerifyStatus (0.13s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (125.481142ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
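Note that the status command exits 2 while still printing well-formed JSON: with --layout=cluster, a non-Running cluster is reported both through the exit code and through HTTP-style status codes in the body (200 OK, 405 Stopped, 418 Paused in the output above). A decoding sketch that mirrors only the fields visible in this report; the real schema may carry more:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// pipe in: minikube status --output=json --layout=cluster
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}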

                                                
                                    
TestPause/serial/Unpause (0.4s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.40s)

TestPause/serial/PauseAgain (0.53s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

TestPause/serial/DeletePaused (1.78s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.77733258s)
--- PASS: TestPause/serial/DeletePaused (1.78s)

TestPause/serial/VerifyDeletedResources (0.06s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (72.38s)
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2707913626 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2707913626 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (33.250390541s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.063000892s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.204502955s)
--- PASS: TestRunningBinaryUpgrade (72.38s)

TestStoppedBinaryUpgrade/Setup (0.56s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

TestStoppedBinaryUpgrade/Upgrade (50.22s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3803691859 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3803691859 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.567603504s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3803691859 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3803691859 -p minikube stop: (23.647560198s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.002496593s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.22s)
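The three steps above are the whole upgrade protocol: start a cluster with an older release binary, stop it, then start the same profile with the freshly built binary. A hedged sketch of that flow; the paths mirror the log, and the run helper is illustrative rather than the test's actual harness:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a binary, mirroring its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", bin, args, err)
		os.Exit(1)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0.3803691859" // older release binary, as in the log
	cur := "out/minikube-linux-amd64"         // freshly built binary under test
	run(old, "start", "-p", "minikube", "--memory=2200", "--vm-driver=none", "--bootstrapper=kubeadm")
	run(old, "-p", "minikube", "stop")
	run(cur, "start", "-p", "minikube", "--memory=2200", "--driver=none", "--bootstrapper=kubeadm")
}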

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

TestKubernetesUpgrade (309.15s)
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (31.199854417s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.803690558s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (68.831008ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.588471109s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (69.99905ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.045845786s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.317613442s)
--- PASS: TestKubernetesUpgrade (309.15s)
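The downgrade attempt above fails fast (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) without touching the running cluster. A minimal reconstruction of that version gate using golang.org/x/mod/semver; minikube's actual check lives in its start validation, so treat this as an illustration of the rule rather than the implementation:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade refuses any requested version older than the existing
// cluster's version; both strings carry the "v" prefix, as in the log.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.31.1", "v1.20.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}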


Test skip (56/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.13
190 TestStartStop/group/newest-cni 0.13
191 TestStartStop/group/default-k8s-diff-port 0.12
192 TestStartStop/group/no-preload 0.13
193 TestStartStop/group/disable-driver-mounts 0.13
194 TestStartStop/group/embed-certs 0.13
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.12s)

TestStartStop/group/no-preload (0.13s)
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)