Test Report: none_Linux 19531

cca1ca437c91fbc205ce13fbbdef95295053f0ce:2024-08-29:35997

Failed tests (1/167)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 71.98        |
TestAddons/parallel/Registry (71.98s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.064274ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-6sw4j" [3053c949-3bc4-4299-9eb4-e8a861711ddc] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003800172s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rdsp6" [78555fec-eab4-4bdb-b2dc-7d3440c326e6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003673346s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.092279165s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/08/29 18:19:08 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:35115               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:06 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:09 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.33.1 | 29 Aug 24 18:09 UTC | 29 Aug 24 18:09 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:06:20
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:06:20.523236  125910 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:06:20.523519  125910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:20.523530  125910 out.go:358] Setting ErrFile to fd 2...
	I0829 18:06:20.523534  125910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:20.523765  125910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-115450/.minikube/bin
	I0829 18:06:20.524440  125910 out.go:352] Setting JSON to false
	I0829 18:06:20.525463  125910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6527,"bootTime":1724948254,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:06:20.525535  125910 start.go:139] virtualization: kvm guest
	I0829 18:06:20.528271  125910 out.go:177] * minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:06:20.529873  125910 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-115450/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:06:20.529898  125910 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:06:20.529950  125910 notify.go:220] Checking for updates...
	I0829 18:06:20.532879  125910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:06:20.534577  125910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	I0829 18:06:20.535911  125910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	I0829 18:06:20.537267  125910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:06:20.538631  125910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:06:20.540401  125910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:06:20.552521  125910 out.go:177] * Using the none driver based on user configuration
	I0829 18:06:20.554112  125910 start.go:297] selected driver: none
	I0829 18:06:20.554140  125910 start.go:901] validating driver "none" against <nil>
	I0829 18:06:20.554159  125910 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:06:20.554204  125910 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0829 18:06:20.554653  125910 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0829 18:06:20.555507  125910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:06:20.555923  125910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:20.556059  125910 cni.go:84] Creating CNI manager for ""
	I0829 18:06:20.556086  125910 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:20.556104  125910 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:06:20.556185  125910 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:20.558050  125910 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0829 18:06:20.559834  125910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/config.json ...
	I0829 18:06:20.559888  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/config.json: {Name:mk52c07a5ee87218f366ef542ada424cad6e59b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:20.560100  125910 start.go:360] acquireMachinesLock for minikube: {Name:mk6427cc4de294c6e0dea91be3c08042a0348605 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:06:20.560147  125910 start.go:364] duration metric: took 26.32µs to acquireMachinesLock for "minikube"
	I0829 18:06:20.560165  125910 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:06:20.560264  125910 start.go:125] createHost starting for "" (driver="none")
	I0829 18:06:20.562161  125910 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0829 18:06:20.563575  125910 exec_runner.go:51] Run: systemctl --version
	I0829 18:06:20.566704  125910 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0829 18:06:20.566772  125910 client.go:168] LocalClient.Create starting
	I0829 18:06:20.566845  125910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-115450/.minikube/certs/ca.pem
	I0829 18:06:20.566888  125910 main.go:141] libmachine: Decoding PEM data...
	I0829 18:06:20.566906  125910 main.go:141] libmachine: Parsing certificate...
	I0829 18:06:20.566996  125910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-115450/.minikube/certs/cert.pem
	I0829 18:06:20.567027  125910 main.go:141] libmachine: Decoding PEM data...
	I0829 18:06:20.567041  125910 main.go:141] libmachine: Parsing certificate...
	I0829 18:06:20.567534  125910 client.go:171] duration metric: took 749.32µs to LocalClient.Create
	I0829 18:06:20.567574  125910 start.go:167] duration metric: took 874.875µs to libmachine.API.Create "minikube"
	I0829 18:06:20.567584  125910 start.go:293] postStartSetup for "minikube" (driver="none")
	I0829 18:06:20.567675  125910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:06:20.567729  125910 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:06:20.578120  125910 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:06:20.578147  125910 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:06:20.578157  125910 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:06:20.580317  125910 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0829 18:06:20.581875  125910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-115450/.minikube/addons for local assets ...
	I0829 18:06:20.581960  125910 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-115450/.minikube/files for local assets ...
	I0829 18:06:20.581984  125910 start.go:296] duration metric: took 14.389575ms for postStartSetup
	I0829 18:06:20.582677  125910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/config.json ...
	I0829 18:06:20.582824  125910 start.go:128] duration metric: took 22.547559ms to createHost
	I0829 18:06:20.582848  125910 start.go:83] releasing machines lock for "minikube", held for 22.692254ms
	I0829 18:06:20.583226  125910 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:06:20.583339  125910 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0829 18:06:20.586101  125910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:06:20.586181  125910 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:06:20.597455  125910 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 18:06:20.597504  125910 start.go:495] detecting cgroup driver to use...
	I0829 18:06:20.597538  125910 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:06:20.598099  125910 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:20.618267  125910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 18:06:20.628472  125910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 18:06:20.638762  125910 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 18:06:20.638827  125910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 18:06:20.648203  125910 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:06:20.657319  125910 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 18:06:20.667314  125910 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:06:20.676888  125910 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:06:20.685875  125910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 18:06:20.695197  125910 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 18:06:20.705320  125910 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 18:06:20.714651  125910 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:06:20.723380  125910 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:06:20.731663  125910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0829 18:06:20.961953  125910 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0829 18:06:21.083259  125910 start.go:495] detecting cgroup driver to use...
	I0829 18:06:21.083321  125910 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:06:21.083429  125910 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:21.104955  125910 exec_runner.go:51] Run: which cri-dockerd
	I0829 18:06:21.106211  125910 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 18:06:21.115348  125910 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0829 18:06:21.115380  125910 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0829 18:06:21.115424  125910 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0829 18:06:21.125322  125910 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 18:06:21.125550  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1535440705 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0829 18:06:21.136592  125910 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0829 18:06:21.362643  125910 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0829 18:06:21.590793  125910 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 18:06:21.590951  125910 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0829 18:06:21.590964  125910 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0829 18:06:21.591001  125910 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0829 18:06:21.600029  125910 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0829 18:06:21.600192  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube84693884 /etc/docker/daemon.json
	I0829 18:06:21.608608  125910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0829 18:06:21.844928  125910 exec_runner.go:51] Run: sudo systemctl restart docker
	I0829 18:06:22.272379  125910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 18:06:22.283993  125910 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0829 18:06:22.302010  125910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:22.313602  125910 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0829 18:06:22.555024  125910 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0829 18:06:22.801968  125910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0829 18:06:23.035946  125910 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0829 18:06:23.052207  125910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:23.063913  125910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0829 18:06:23.284472  125910 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0829 18:06:23.357630  125910 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 18:06:23.357712  125910 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0829 18:06:23.359098  125910 start.go:563] Will wait 60s for crictl version
	I0829 18:06:23.359148  125910 exec_runner.go:51] Run: which crictl
	I0829 18:06:23.360159  125910 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0829 18:06:23.390456  125910 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0829 18:06:23.390543  125910 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0829 18:06:23.414038  125910 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0829 18:06:23.439581  125910 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0829 18:06:23.439699  125910 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0829 18:06:23.442789  125910 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0829 18:06:23.444125  125910 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:06:23.444258  125910 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:23.444274  125910 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.0 docker true true} ...
	I0829 18:06:23.444386  125910 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
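[Editor's note] The kubelet unit dumped above uses the standard systemd drop-in override pattern: an empty `ExecStart=` line clears the ExecStart inherited from the base `kubelet.service` before the drop-in defines its own. A minimal, self-contained sketch of that pattern (the directory and the trimmed ExecStart flags here are illustrative, not minikube's exact writer):

```shell
# Write a kubelet drop-in the way minikube's 10-kubeadm.conf does:
# a blank ExecStart= resets the base unit's command before overriding it.
dropin_dir="$(mktemp -d)/kubelet.service.d"
mkdir -p "$dropin_dir"
cat > "$dropin_dir/10-kubeadm.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
EOF
# Both lines must be present: the reset, then the override.
grep -c '^ExecStart=' "$dropin_dir/10-kubeadm.conf"   # prints 2
```

Without the empty `ExecStart=` line, systemd would reject the drop-in for a non-oneshot service because two ExecStart commands would be defined.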
	I0829 18:06:23.444444  125910 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0829 18:06:23.497668  125910 cni.go:84] Creating CNI manager for ""
	I0829 18:06:23.497694  125910 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:23.497709  125910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:06:23.497731  125910 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:06:23.497874  125910 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.154.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-9"
	  kubeletExtraArgs:
	    node-ip: 10.154.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
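[Editor's note] The kubeadm config dumped above is a single file carrying four separate API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) split by `---` document separators; kubeadm parses each document independently. A reduced stand-in showing that multi-document structure:

```shell
# The kubeadm.yaml minikube generates bundles several API objects in one
# file, separated by '---'. Count the kinds in a reduced stand-in copy.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
kinds=$(grep -c '^kind:' "$cfg")
echo "$kinds documents"   # prints "4 documents"
```

Note the deprecation warnings later in this log: the `kubeadm.k8s.io/v1beta3` documents are accepted by kubeadm v1.31 but flagged for migration via `kubeadm config migrate`.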
	I0829 18:06:23.497936  125910 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:23.506588  125910 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 18:06:23.506653  125910 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:23.514954  125910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0829 18:06:23.514974  125910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0829 18:06:23.515002  125910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:06:23.515030  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 18:06:23.514955  125910 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 18:06:23.515172  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 18:06:23.526910  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0829 18:06:23.565543  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube250635978 /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:06:23.567189  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1563958918 /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:06:23.598365  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2845998386 /var/lib/minikube/binaries/v1.31.0/kubelet
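[Editor's note] The `?checksum=file:https://...sha256` URLs above tell minikube's downloader to verify each binary against its published `.sha256` digest before installing it. The same guarantee can be sketched locally with `sha256sum`; the file below is a stand-in rather than a real kubelet download:

```shell
# Simulate minikube's checksum-verified transfer: record the expected
# digest, then verify the staged artifact against it before "installing".
workdir="$(mktemp -d)"
printf 'stand-in-kubelet-binary' > "$workdir/kubelet"
sha256sum "$workdir/kubelet" | awk '{print $1}' > "$workdir/kubelet.sha256"
expected=$(cat "$workdir/kubelet.sha256")
actual=$(sha256sum "$workdir/kubelet" | awk '{print $1}')
# Only move the binary into place if the digests match.
[ "$expected" = "$actual" ] && echo "checksum OK"
```

A mismatch here would abort the transfer, which is why the log distinguishes "Not caching binary" (download with checksum) from plain cache copies.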
	I0829 18:06:23.666819  125910 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:06:23.675988  125910 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0829 18:06:23.676015  125910 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0829 18:06:23.676068  125910 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0829 18:06:23.684083  125910 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0829 18:06:23.684254  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2203509941 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0829 18:06:23.692702  125910 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0829 18:06:23.692727  125910 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0829 18:06:23.692769  125910 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0829 18:06:23.701115  125910 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:06:23.701283  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3958810139 /lib/systemd/system/kubelet.service
	I0829 18:06:23.709543  125910 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0829 18:06:23.709697  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3549669739 /var/tmp/minikube/kubeadm.yaml.new
	I0829 18:06:23.718130  125910 exec_runner.go:51] Run: grep 10.154.0.4	control-plane.minikube.internal$ /etc/hosts
	I0829 18:06:23.719455  125910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0829 18:06:23.942004  125910 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0829 18:06:23.957256  125910 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube for IP: 10.154.0.4
	I0829 18:06:23.957289  125910 certs.go:194] generating shared ca certs ...
	I0829 18:06:23.957316  125910 certs.go:226] acquiring lock for ca certs: {Name:mkf8ec3ca11c45bd7e41744baf96c75af9f37afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:23.957506  125910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-115450/.minikube/ca.key
	I0829 18:06:23.957565  125910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-115450/.minikube/proxy-client-ca.key
	I0829 18:06:23.957583  125910 certs.go:256] generating profile certs ...
	I0829 18:06:23.957665  125910 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/client.key
	I0829 18:06:23.957685  125910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/client.crt with IP's: []
	I0829 18:06:24.097017  125910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/client.crt ...
	I0829 18:06:24.097056  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/client.crt: {Name:mkfe7eb0d496cc86c0a43a42ee58175976d75a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:24.097239  125910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/client.key ...
	I0829 18:06:24.097256  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/client.key: {Name:mkb5968a1ff5e22c78a318fbe25e7cb2d7e2eb7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:24.097352  125910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.key.1b9420d6
	I0829 18:06:24.097375  125910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
	I0829 18:06:24.173342  125910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
	I0829 18:06:24.173382  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk84a3d589c667e5d652d128f086b755db103361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:24.173547  125910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
	I0829 18:06:24.173573  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mk88b18be3e14d38ab9764cc92b614deae4929ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:24.173657  125910 certs.go:381] copying /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.crt
	I0829 18:06:24.173758  125910 certs.go:385] copying /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.key
	I0829 18:06:24.173841  125910 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.key
	I0829 18:06:24.173864  125910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0829 18:06:24.368417  125910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.crt ...
	I0829 18:06:24.368453  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.crt: {Name:mk4adcfdce05ecb5954ec7a4cb1287ff204627ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:24.368621  125910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.key ...
	I0829 18:06:24.368641  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.key: {Name:mk08772f4401464e315411383bd1b920c4ac288d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:24.368820  125910 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-115450/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:06:24.368870  125910 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-115450/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:06:24.368904  125910 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-115450/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:06:24.368937  125910 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-115450/.minikube/certs/key.pem (1675 bytes)
	I0829 18:06:24.369604  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:06:24.369760  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2450269673 /var/lib/minikube/certs/ca.crt
	I0829 18:06:24.380085  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:06:24.380224  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube85914346 /var/lib/minikube/certs/ca.key
	I0829 18:06:24.388671  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:06:24.388811  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3571935976 /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:06:24.397416  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 18:06:24.397548  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4270742366 /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:06:24.406214  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0829 18:06:24.406360  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2734455315 /var/lib/minikube/certs/apiserver.crt
	I0829 18:06:24.415125  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:06:24.415281  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3813895152 /var/lib/minikube/certs/apiserver.key
	I0829 18:06:24.424316  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:06:24.424455  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1285934481 /var/lib/minikube/certs/proxy-client.crt
	I0829 18:06:24.432685  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:06:24.432842  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3872394090 /var/lib/minikube/certs/proxy-client.key
	I0829 18:06:24.441047  125910 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0829 18:06:24.441070  125910 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:24.441118  125910 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:24.450139  125910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-115450/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:06:24.450299  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1994071538 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:24.458995  125910 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:06:24.459174  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube285726040 /var/lib/minikube/kubeconfig
	I0829 18:06:24.467782  125910 exec_runner.go:51] Run: openssl version
	I0829 18:06:24.470613  125910 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:06:24.479688  125910 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:24.481057  125910 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:24.481115  125910 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:24.484000  125910 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
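[Editor's note] The `openssl version` / `openssl x509 -hash` / `ln -fs ... b5213941.0` sequence above installs minikubeCA into the OpenSSL trust layout: OpenSSL locates CAs in a certs directory via symlinks named `<subject-hash>.0`. A self-contained sketch with a throwaway self-signed CA (names and paths here are illustrative):

```shell
# Reproduce the hash-symlink step minikube runs for minikubeCA.pem:
# OpenSSL looks up CAs by <subject-hash>.0 symlinks in the certs dir.
certs="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=sketchCA' \
  -keyout "$certs/ca.key" -out "$certs/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certs/ca.pem")
ln -fs "$certs/ca.pem" "$certs/$hash.0"
readlink "$certs/$hash.0"
```

The `b5213941.0` name in the log is exactly this subject hash computed for minikubeCA.pem.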
	I0829 18:06:24.492064  125910 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:06:24.493229  125910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:06:24.493272  125910 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:24.493385  125910 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 18:06:24.511140  125910 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:06:24.519891  125910 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:06:24.528006  125910 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0829 18:06:24.550531  125910 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:06:24.559326  125910 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:06:24.559346  125910 kubeadm.go:157] found existing configuration files:
	
	I0829 18:06:24.559393  125910 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:06:24.567350  125910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:06:24.567405  125910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:06:24.575550  125910 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:06:24.583542  125910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:06:24.583633  125910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:06:24.591397  125910 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:06:24.599542  125910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:06:24.599678  125910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:06:24.607430  125910 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:06:24.615876  125910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:06:24.615930  125910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:06:24.623533  125910 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:06:24.657042  125910 kubeadm.go:310] W0829 18:06:24.656905  126785 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:24.657508  125910 kubeadm.go:310] W0829 18:06:24.657458  126785 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:24.659098  125910 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:06:24.659144  125910 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:06:24.762798  125910 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:24.762847  125910 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:06:24.762854  125910 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:06:24.762860  125910 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:06:24.772892  125910 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:06:24.775696  125910 out.go:235]   - Generating certificates and keys ...
	I0829 18:06:24.775754  125910 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:06:24.775769  125910 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:06:24.933874  125910 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:06:25.119220  125910 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:06:25.290140  125910 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:06:25.496170  125910 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:06:25.624506  125910 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:06:25.624553  125910 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0829 18:06:25.973828  125910 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:06:25.973913  125910 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0829 18:06:26.057563  125910 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:06:26.257792  125910 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:06:26.340635  125910 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:06:26.340743  125910 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:06:26.490845  125910 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:26.879443  125910 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:26.923108  125910 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:27.041180  125910 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:27.335243  125910 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:27.335749  125910 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:27.338003  125910 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:27.340366  125910 out.go:235]   - Booting up control plane ...
	I0829 18:06:27.340400  125910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:27.340426  125910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:27.340435  125910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:27.362085  125910 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:27.366990  125910 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:27.367051  125910 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:27.607016  125910 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:27.607152  125910 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:28.108397  125910 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.187084ms
	I0829 18:06:28.108437  125910 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:33.110253  125910 kubeadm.go:310] [api-check] The API server is healthy after 5.001863838s
	I0829 18:06:33.121860  125910 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:33.136515  125910 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:33.158104  125910 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:33.158128  125910 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:33.167513  125910 kubeadm.go:310] [bootstrap-token] Using token: 4dtbea.wlk9sfejvttpd4tt
	I0829 18:06:33.169450  125910 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:33.169488  125910 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:33.173740  125910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:33.182631  125910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:33.186398  125910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:33.190033  125910 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:33.193726  125910 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:33.516081  125910 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:33.941643  125910 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:34.517097  125910 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:34.517904  125910 kubeadm.go:310] 
	I0829 18:06:34.517924  125910 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:34.517929  125910 kubeadm.go:310] 
	I0829 18:06:34.517934  125910 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:34.517938  125910 kubeadm.go:310] 
	I0829 18:06:34.517942  125910 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:34.517946  125910 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:34.517951  125910 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:34.517954  125910 kubeadm.go:310] 
	I0829 18:06:34.517958  125910 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:34.517962  125910 kubeadm.go:310] 
	I0829 18:06:34.517966  125910 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:34.517970  125910 kubeadm.go:310] 
	I0829 18:06:34.517973  125910 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:34.517977  125910 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:34.517982  125910 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:34.517985  125910 kubeadm.go:310] 
	I0829 18:06:34.517989  125910 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:34.517992  125910 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:34.517996  125910 kubeadm.go:310] 
	I0829 18:06:34.517999  125910 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4dtbea.wlk9sfejvttpd4tt \
	I0829 18:06:34.518010  125910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91862cdaaba96d8fac7147d7ffa8d01bb131e7c252a62aa27e915c5c23a7886c \
	I0829 18:06:34.518014  125910 kubeadm.go:310] 	--control-plane 
	I0829 18:06:34.518018  125910 kubeadm.go:310] 
	I0829 18:06:34.518022  125910 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:34.518039  125910 kubeadm.go:310] 
	I0829 18:06:34.518047  125910 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4dtbea.wlk9sfejvttpd4tt \
	I0829 18:06:34.518051  125910 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91862cdaaba96d8fac7147d7ffa8d01bb131e7c252a62aa27e915c5c23a7886c 
	I0829 18:06:34.521226  125910 cni.go:84] Creating CNI manager for ""
	I0829 18:06:34.521258  125910 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:34.523468  125910 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:06:34.524826  125910 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:06:34.535669  125910 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:06:34.535829  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2305231665 /etc/cni/net.d/1-k8s.conflist
	I0829 18:06:34.545142  125910 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:34.545203  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:34.545238  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_08_29T18_06_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0829 18:06:34.554134  125910 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:34.617898  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:35.118820  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:35.618855  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:36.117894  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:36.618508  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:37.118444  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:37.618777  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:38.118811  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:38.618889  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:39.118752  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:39.618613  125910 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:39.698763  125910 kubeadm.go:1113] duration metric: took 5.153610536s to wait for elevateKubeSystemPrivileges
	I0829 18:06:39.698807  125910 kubeadm.go:394] duration metric: took 15.205541065s to StartCluster
	I0829 18:06:39.698835  125910 settings.go:142] acquiring lock: {Name:mkbfc925e88f30f98d2914047ba39ce478afce2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:39.698907  125910 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-115450/kubeconfig
	I0829 18:06:39.700194  125910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-115450/kubeconfig: {Name:mk6922deb5af20027f9567795a53e115bbde746c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:39.700898  125910 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:39.701367  125910 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:39.701281  125910 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:39.701480  125910 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0829 18:06:39.701504  125910 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0829 18:06:39.701521  125910 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0829 18:06:39.701534  125910 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0829 18:06:39.701558  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.701605  125910 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0829 18:06:39.701634  125910 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0829 18:06:39.701480  125910 addons.go:69] Setting volcano=true in profile "minikube"
	I0829 18:06:39.701670  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.701680  125910 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0829 18:06:39.701696  125910 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0829 18:06:39.701723  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.701738  125910 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0829 18:06:39.701756  125910 addons.go:69] Setting yakd=true in profile "minikube"
	I0829 18:06:39.701766  125910 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0829 18:06:39.701781  125910 addons.go:234] Setting addon yakd=true in "minikube"
	I0829 18:06:39.701790  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.701851  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.701904  125910 addons.go:69] Setting registry=true in profile "minikube"
	I0829 18:06:39.701947  125910 addons.go:234] Setting addon registry=true in "minikube"
	I0829 18:06:39.701974  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.702422  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.702442  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.702491  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.702568  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.702583  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.702613  125910 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0829 18:06:39.702618  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.702636  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.702654  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.702693  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.702708  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.702719  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.702728  125910 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0829 18:06:39.702757  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.702764  125910 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0829 18:06:39.701671  125910 addons.go:234] Setting addon volcano=true in "minikube"
	I0829 18:06:39.703320  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.704119  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.704135  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.704169  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.702717  125910 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0829 18:06:39.704747  125910 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0829 18:06:39.704765  125910 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0829 18:06:39.704784  125910 mustload.go:65] Loading cluster: minikube
	I0829 18:06:39.704803  125910 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0829 18:06:39.704873  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.704894  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.704906  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.704940  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.704968  125910 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0829 18:06:39.705000  125910 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0829 18:06:39.705043  125910 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0829 18:06:39.705064  125910 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0829 18:06:39.705204  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.705384  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.705095  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.705798  125910 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:39.702582  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.706289  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.706340  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.706619  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.706681  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.706696  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.706712  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.706836  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.706884  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.706897  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.706941  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.707373  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.707396  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.707430  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.708646  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.708668  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.708724  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.708787  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.708799  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.708828  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.706763  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.715264  125910 out.go:177] * Configuring local host environment ...
	W0829 18:06:39.719533  125910 out.go:270] * 
	W0829 18:06:39.719566  125910 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0829 18:06:39.719574  125910 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0829 18:06:39.719583  125910 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0829 18:06:39.719590  125910 out.go:270] * 
	W0829 18:06:39.719705  125910 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0829 18:06:39.719740  125910 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0829 18:06:39.719748  125910 out.go:270] * 
	W0829 18:06:39.719874  125910 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0829 18:06:39.719892  125910 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0829 18:06:39.719899  125910 out.go:270] * 
	W0829 18:06:39.719914  125910 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0829 18:06:39.719952  125910 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:06:39.721544  125910 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:39.728862  125910 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0829 18:06:39.729452  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.729461  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.735220  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.735391  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.735745  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.738159  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.738924  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.738953  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.739384  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.750790  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.761203  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.761307  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.773988  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.791763  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.791798  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.791847  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.791859  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.791865  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.791940  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.792128  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.812765  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.814656  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.815041  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.815297  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.815346  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.816953  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.817307  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.817328  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.822757  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.826284  125910 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:39.827873  125910 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:39.827902  125910 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0829 18:06:39.827911  125910 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:39.827966  125910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:39.831828  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.831911  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.839102  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.839185  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.839413  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.839447  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.844889  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.847086  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.847117  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.848632  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.848672  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.852360  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.852449  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.853266  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.853286  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.853536  125910 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:39.856449  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.856679  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.856695  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.857333  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.858610  125910 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:39.858723  125910 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0829 18:06:39.859725  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.859777  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.860021  125910 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:39.860060  125910 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:39.860213  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2155229483 /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:39.860630  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.861189  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.861488  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.865018  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.865079  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.865360  125910 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:39.867428  125910 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:39.867460  125910 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:39.867475  125910 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:39.867501  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:39.867634  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3988696218 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:39.867644  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4100625599 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:39.867809  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:39.863340  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.867849  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.867957  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1944112394 /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:39.868119  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.868203  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.868501  125910 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:39.869872  125910 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:39.869910  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:39.870068  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3558974901 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:39.870978  125910 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0829 18:06:39.872466  125910 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0829 18:06:39.872852  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.874532  125910 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:39.874573  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0829 18:06:39.875133  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4252441875 /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:39.876896  125910 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:39.878570  125910 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:39.878607  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:39.878755  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1016850315 /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:39.881388  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.881476  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.883392  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.883423  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.883848  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.883871  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.887698  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.887776  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.889121  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.889707  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.891357  125910 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0829 18:06:39.891401  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.893762  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.893787  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.893832  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.893995  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:39.895796  125910 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:39.895838  125910 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:39.895985  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4136049015 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:39.897246  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.897271  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.897895  125910 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:39.897925  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:39.898509  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2415418539 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:39.899890  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:39.906990  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.907067  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.907068  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.907102  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.908594  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.910287  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.910308  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.910657  125910 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:39.911660  125910 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:39.911701  125910 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:39.911865  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4292831230 /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:39.912941  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.915116  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.915462  125910 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0829 18:06:39.915474  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:39.915498  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.916238  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:39.916254  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:39.916290  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:39.918020  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:39.919156  125910 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:39.919182  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:39.919189  125910 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:39.919326  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube158545021 /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:39.920320  125910 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:39.920346  125910 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:39.920452  125910 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:39.920472  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube160325897 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:39.922435  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:39.926492  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:39.927717  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.927741  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.928187  125910 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:39.928226  125910 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:39.928361  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3515817089 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:39.929659  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:39.930895  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:39.931324  125910 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:39.931352  125910 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:39.931508  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3698007154 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:39.932746  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:39.933537  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.933659  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:39.934834  125910 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:39.935525  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:39.935554  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:39.935989  125910 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:39.937149  125910 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:39.937215  125910 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:39.937238  125910 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:39.937390  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube859171850 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:39.938474  125910 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:39.938504  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:39.938678  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2764132657 /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:39.939846  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:39.941720  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:39.941763  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:39.942734  125910 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:39.942768  125910 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:39.942891  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2354888069 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:39.944216  125910 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:39.944259  125910 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:39.944375  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3436240903 /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:39.944552  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.944687  125910 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:39.944715  125910 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:39.944919  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3486344117 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:39.962213  125910 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:39.962836  125910 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:39.963011  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3224430724 /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:39.966071  125910 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:39.966108  125910 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:39.966259  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3427119839 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:39.968128  125910 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:39.968163  125910 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:39.968296  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1382350441 /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:39.968417  125910 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:39.968438  125910 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:39.968554  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube760141312 /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:39.968964  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:39.970272  125910 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:39.970299  125910 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:39.970443  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1192553788 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:39.983903  125910 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:39.983940  125910 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:39.984069  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3889121364 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:39.988757  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.988796  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:39.988833  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.988859  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:39.993279  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:39.993612  125910 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:39.993640  125910 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:39.993911  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1004408014 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:39.997005  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:39.997388  125910 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:39.997413  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:39.997528  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3661040669 /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:40.006034  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:40.006071  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:40.007585  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:40.007642  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:40.010392  125910 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:40.010427  125910 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:40.010574  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1520214835 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:40.013243  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:40.013289  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:40.013316  125910 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:40.013332  125910 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0829 18:06:40.013403  125910 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:40.013449  125910 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:40.014870  125910 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:40.014903  125910 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:40.015036  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube404306313 /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:40.015279  125910 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:40.016577  125910 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:40.017845  125910 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:40.017875  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:40.018006  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube772690685 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:40.021930  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:40.025967  125910 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:40.026013  125910 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:40.026150  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2469457207 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:40.041546  125910 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:40.041587  125910 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:40.041731  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3111337385 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:40.048426  125910 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:40.048599  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2838392090 /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:40.050060  125910 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:40.050095  125910 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:40.050232  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3903526356 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:40.058493  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:40.070255  125910 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:40.070296  125910 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:40.070439  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube667992707 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:40.074408  125910 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:40.074452  125910 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:40.074602  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2067132620 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:40.082142  125910 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:40.082186  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:40.082339  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3998763663 /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:40.096695  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:40.098978  125910 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:40.099013  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:40.099566  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3962101814 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:40.139358  125910 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:40.139396  125910 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:40.139544  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1219187001 /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:40.143875  125910 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:40.143912  125910 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:40.144067  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2674461133 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:40.166676  125910 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0829 18:06:40.173044  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:40.208822  125910 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
	I0829 18:06:40.215557  125910 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
	I0829 18:06:40.215585  125910 node_ready.go:38] duration metric: took 6.709015ms for node "ubuntu-20-agent-9" to be "Ready" ...
	I0829 18:06:40.215600  125910 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:40.216920  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:40.219107  125910 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:40.219142  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:40.219345  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2649237320 /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:40.225327  125910 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-j7rv9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:40.265936  125910 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:40.265974  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:40.266102  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4197988146 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:40.309434  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:40.312887  125910 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:40.312931  125910 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:40.313088  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3387333812 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:40.450922  125910 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:40.461766  125910 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:40.461810  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:40.461989  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2003708190 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:40.563111  125910 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:40.563160  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:40.563316  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3111717427 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:40.742871  125910 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:40.742927  125910 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:40.743089  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3650190370 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:40.879531  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:40.954643  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.039130441s)
	I0829 18:06:40.984766  125910 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0829 18:06:41.129474  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.132409988s)
	I0829 18:06:41.184542  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.19119329s)
	I0829 18:06:41.184588  125910 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0829 18:06:41.317004  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.29400698s)
	I0829 18:06:41.320298  125910 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0829 18:06:41.336914  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.162725078s)
	I0829 18:06:41.336974  125910 addons.go:475] Verifying addon registry=true in "minikube"
	I0829 18:06:41.341047  125910 out.go:177] * Verifying registry addon...
	I0829 18:06:41.359407  125910 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:06:41.364741  125910 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:41.364772  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.428886  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.370324748s)
	I0829 18:06:41.752731  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.443032689s)
	I0829 18:06:41.873557  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.045888  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.827403212s)
	W0829 18:06:42.045941  125910 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:42.045968  125910 retry.go:31] will retry after 234.935195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:42.235662  125910 pod_ready.go:103] pod "coredns-6f6b679f8f-j7rv9" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:42.285671  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:42.365797  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.870904  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.098823  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.158931102s)
	I0829 18:06:43.364375  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.553026  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.673419191s)
	I0829 18:06:43.553062  125910 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0829 18:06:43.554833  125910 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:06:43.557041  125910 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:06:43.562579  125910 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:43.562608  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:43.600822  125910 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.31507906s)
	I0829 18:06:43.864049  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.062621  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.364180  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.562184  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.731854  125910 pod_ready.go:103] pod "coredns-6f6b679f8f-j7rv9" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:44.863138  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.062261  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:45.363977  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.561759  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:45.864411  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.062635  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.364600  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.562586  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.732086  125910 pod_ready.go:103] pod "coredns-6f6b679f8f-j7rv9" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:46.863871  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.989583  125910 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:06:46.989774  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1441990126 /var/lib/minikube/google_application_credentials.json
	I0829 18:06:47.002615  125910 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:06:47.002773  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4103308000 /var/lib/minikube/google_cloud_project
	I0829 18:06:47.014875  125910 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0829 18:06:47.014934  125910 host.go:66] Checking if "minikube" exists ...
	I0829 18:06:47.015461  125910 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0829 18:06:47.015480  125910 api_server.go:166] Checking apiserver status ...
	I0829 18:06:47.015507  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:47.034979  125910 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/127205/cgroup
	I0829 18:06:47.047721  125910 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb"
	I0829 18:06:47.047800  125910 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a7a90a102c7112f5b4c69c7885cc0c56884c19a83dd488b84da0a74a2b2526eb/freezer.state
	I0829 18:06:47.059960  125910 api_server.go:204] freezer state: "THAWED"
	I0829 18:06:47.059998  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:47.066547  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:47.067835  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:47.067923  125910 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:06:47.093227  125910 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:47.095360  125910 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:06:47.099291  125910 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:06:47.099348  125910 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:06:47.099489  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube65337844 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:06:47.111020  125910 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:06:47.111068  125910 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:06:47.111208  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube418966692 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:06:47.122928  125910 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:47.122966  125910 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:06:47.123094  125910 exec_runner.go:51] Run: sudo cp -a /tmp/minikube295852795 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:47.135081  125910 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:47.363955  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.565048  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:47.565172  125910 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0829 18:06:47.566793  125910 out.go:177] * Verifying gcp-auth addon...
	I0829 18:06:47.569048  125910 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:06:47.663919  125910 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:06:47.732369  125910 pod_ready.go:98] pod "coredns-6f6b679f8f-j7rv9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:47 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:39 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:06:40 +0000 UTC,FinishedAt:2024-08-29 18:06:46 +0000 UTC,ContainerID:docker://a745af9ea23f9a3463b656ad3a3bb83881f23597ea2f86d88ee63c02f8057287,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://a745af9ea23f9a3463b656ad3a3bb83881f23597ea2f86d88ee63c02f8057287 Started:0xc00156a470 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000229860} {Name:kube-api-access-4b5tp MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0002298d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:06:47.732403  125910 pod_ready.go:82] duration metric: took 7.507041522s for pod "coredns-6f6b679f8f-j7rv9" in "kube-system" namespace to be "Ready" ...
	E0829 18:06:47.732419  125910 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-j7rv9" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:47 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:39 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:39 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:06:40 +0000 UTC,FinishedAt:2024-08-29 18:06:46 +0000 UTC,ContainerID:docker://a745af9ea23f9a3463b656ad3a3bb83881f23597ea2f86d88ee63c02f8057287,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://a745af9ea23f9a3463b656ad3a3bb83881f23597ea2f86d88ee63c02f8057287 Started:0xc00156a470 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000229860} {Name:kube-api-access-4b5tp MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0002298d0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:06:47.732439  125910 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-n8bwx" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:47.864248  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.062525  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.363744  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.563410  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.863899  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.062776  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.363457  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.562065  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.891006  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.894071  125910 pod_ready.go:93] pod "coredns-6f6b679f8f-n8bwx" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:49.894101  125910 pod_ready.go:82] duration metric: took 2.16165171s for pod "coredns-6f6b679f8f-n8bwx" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.894115  125910 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.943930  125910 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:49.943958  125910 pod_ready.go:82] duration metric: took 49.834345ms for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.943979  125910 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.949459  125910 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:49.949483  125910 pod_ready.go:82] duration metric: took 5.495672ms for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.949494  125910 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.955581  125910 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:49.955637  125910 pod_ready.go:82] duration metric: took 6.134189ms for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.955654  125910 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2tmq9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.961117  125910 pod_ready.go:93] pod "kube-proxy-2tmq9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:49.961140  125910 pod_ready.go:82] duration metric: took 5.479144ms for pod "kube-proxy-2tmq9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:49.961150  125910 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:50.061685  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.136326  125910 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:50.136361  125910 pod_ready.go:82] duration metric: took 175.201637ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:50.136370  125910 pod_ready.go:39] duration metric: took 9.92052386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:50.136390  125910 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:06:50.136461  125910 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:06:50.155749  125910 api_server.go:72] duration metric: took 10.435740906s to wait for apiserver process to appear ...
	I0829 18:06:50.155778  125910 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:06:50.155800  125910 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0829 18:06:50.159405  125910 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0829 18:06:50.160501  125910 api_server.go:141] control plane version: v1.31.0
	I0829 18:06:50.160560  125910 api_server.go:131] duration metric: took 4.750246ms to wait for apiserver health ...
	I0829 18:06:50.160577  125910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:06:50.341189  125910 system_pods.go:59] 17 kube-system pods found
	I0829 18:06:50.341223  125910 system_pods.go:61] "coredns-6f6b679f8f-n8bwx" [3b41c9dd-26d9-421c-a69d-9b681230657b] Running
	I0829 18:06:50.341232  125910 system_pods.go:61] "csi-hostpath-attacher-0" [bbe74a4a-abd8-4cb8-bc37-c1c1a11c4d58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:06:50.341238  125910 system_pods.go:61] "csi-hostpath-resizer-0" [6ccc87f4-529e-4067-a707-02e81263ec5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:06:50.341246  125910 system_pods.go:61] "csi-hostpathplugin-25fjq" [05e9fa01-8b08-439f-8837-5ab99e5365b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:06:50.341251  125910 system_pods.go:61] "etcd-ubuntu-20-agent-9" [04f79dbe-e2ec-40b1-87a9-3ef336bafc96] Running
	I0829 18:06:50.341259  125910 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [5fc5a296-44a5-42b0-9834-71918d1b321a] Running
	I0829 18:06:50.341265  125910 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [8dc98e0c-143f-43d8-99c1-b8e23c68d6ff] Running
	I0829 18:06:50.341270  125910 system_pods.go:61] "kube-proxy-2tmq9" [40a88934-c2a7-4158-9f4f-50df284880b1] Running
	I0829 18:06:50.341275  125910 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [1ae5974b-50d5-4a31-8728-9482872e7da8] Running
	I0829 18:06:50.341289  125910 system_pods.go:61] "metrics-server-8988944d9-7vfbr" [8657419a-85d2-49d6-bf71-dbcabc1f5007] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:06:50.341303  125910 system_pods.go:61] "nvidia-device-plugin-daemonset-6kqmf" [0bc9b817-9548-41ae-8e30-647135c64203] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:06:50.341310  125910 system_pods.go:61] "registry-6fb4cdfc84-6sw4j" [3053c949-3bc4-4299-9eb4-e8a861711ddc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:06:50.341319  125910 system_pods.go:61] "registry-proxy-rdsp6" [78555fec-eab4-4bdb-b2dc-7d3440c326e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:06:50.341325  125910 system_pods.go:61] "snapshot-controller-56fcc65765-fnd25" [6a8dbcc0-e7d5-4a75-8837-9e8f0ad3b73b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:50.341334  125910 system_pods.go:61] "snapshot-controller-56fcc65765-hjspw" [d80a59c8-a5f1-4ffd-b791-a0b4a13a465b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:50.341338  125910 system_pods.go:61] "storage-provisioner" [3ba14357-047d-42bb-acaa-5406722627e7] Running
	I0829 18:06:50.341344  125910 system_pods.go:61] "tiller-deploy-b48cc5f79-6j94s" [dfd7cfb3-e540-4e6a-bf1b-cf252955d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:06:50.341382  125910 system_pods.go:74] duration metric: took 180.7921ms to wait for pod list to return data ...
	I0829 18:06:50.341398  125910 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:06:50.363133  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.536071  125910 default_sa.go:45] found service account: "default"
	I0829 18:06:50.536097  125910 default_sa.go:55] duration metric: took 194.690188ms for default service account to be created ...
	I0829 18:06:50.536108  125910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:06:50.562377  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.743188  125910 system_pods.go:86] 17 kube-system pods found
	I0829 18:06:50.743235  125910 system_pods.go:89] "coredns-6f6b679f8f-n8bwx" [3b41c9dd-26d9-421c-a69d-9b681230657b] Running
	I0829 18:06:50.743251  125910 system_pods.go:89] "csi-hostpath-attacher-0" [bbe74a4a-abd8-4cb8-bc37-c1c1a11c4d58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:06:50.743260  125910 system_pods.go:89] "csi-hostpath-resizer-0" [6ccc87f4-529e-4067-a707-02e81263ec5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:06:50.743271  125910 system_pods.go:89] "csi-hostpathplugin-25fjq" [05e9fa01-8b08-439f-8837-5ab99e5365b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:06:50.743279  125910 system_pods.go:89] "etcd-ubuntu-20-agent-9" [04f79dbe-e2ec-40b1-87a9-3ef336bafc96] Running
	I0829 18:06:50.743285  125910 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [5fc5a296-44a5-42b0-9834-71918d1b321a] Running
	I0829 18:06:50.743290  125910 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [8dc98e0c-143f-43d8-99c1-b8e23c68d6ff] Running
	I0829 18:06:50.743294  125910 system_pods.go:89] "kube-proxy-2tmq9" [40a88934-c2a7-4158-9f4f-50df284880b1] Running
	I0829 18:06:50.743299  125910 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [1ae5974b-50d5-4a31-8728-9482872e7da8] Running
	I0829 18:06:50.743311  125910 system_pods.go:89] "metrics-server-8988944d9-7vfbr" [8657419a-85d2-49d6-bf71-dbcabc1f5007] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:06:50.743321  125910 system_pods.go:89] "nvidia-device-plugin-daemonset-6kqmf" [0bc9b817-9548-41ae-8e30-647135c64203] Running
	I0829 18:06:50.743333  125910 system_pods.go:89] "registry-6fb4cdfc84-6sw4j" [3053c949-3bc4-4299-9eb4-e8a861711ddc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:06:50.743344  125910 system_pods.go:89] "registry-proxy-rdsp6" [78555fec-eab4-4bdb-b2dc-7d3440c326e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:06:50.743356  125910 system_pods.go:89] "snapshot-controller-56fcc65765-fnd25" [6a8dbcc0-e7d5-4a75-8837-9e8f0ad3b73b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:50.743365  125910 system_pods.go:89] "snapshot-controller-56fcc65765-hjspw" [d80a59c8-a5f1-4ffd-b791-a0b4a13a465b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:06:50.743372  125910 system_pods.go:89] "storage-provisioner" [3ba14357-047d-42bb-acaa-5406722627e7] Running
	I0829 18:06:50.743378  125910 system_pods.go:89] "tiller-deploy-b48cc5f79-6j94s" [dfd7cfb3-e540-4e6a-bf1b-cf252955d4ae] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:06:50.743390  125910 system_pods.go:126] duration metric: took 207.275463ms to wait for k8s-apps to be running ...
	I0829 18:06:50.743404  125910 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:06:50.743477  125910 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:06:50.755738  125910 system_svc.go:56] duration metric: took 12.321219ms WaitForService to wait for kubelet
	I0829 18:06:50.755778  125910 kubeadm.go:582] duration metric: took 11.035778814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:50.755802  125910 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:06:50.863440  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.937651  125910 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0829 18:06:50.937686  125910 node_conditions.go:123] node cpu capacity is 8
	I0829 18:06:50.937703  125910 node_conditions.go:105] duration metric: took 181.894653ms to run NodePressure ...
	I0829 18:06:50.937719  125910 start.go:241] waiting for startup goroutines ...
	I0829 18:06:51.062004  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.363188  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.562062  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.863548  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.062947  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.363931  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.561865  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.898433  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.062363  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.363953  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.563000  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.862815  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.062352  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:54.364252  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.562397  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:54.863791  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.062371  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.363373  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.562649  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.864128  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.075759  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:56.363015  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.562017  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:56.864377  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.062582  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.363430  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.562482  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.863896  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.061353  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.366690  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.562215  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.863114  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.061933  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.362974  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.561803  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.864037  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.063023  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.363141  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.561484  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.863909  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.061425  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.363539  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.561661  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.864336  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.062151  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.364201  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.562836  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.863674  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.061710  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.364742  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.612911  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.863753  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.062244  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.363472  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.561135  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.863193  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.061790  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.363213  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.562249  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.863000  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.061898  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.362902  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.561898  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.863125  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.062383  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.363660  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.562412  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.864237  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.061857  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.363727  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.562779  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.864243  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.062351  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.364019  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.562017  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.863517  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.061692  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.363715  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.562639  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.864135  125910 kapi.go:107] duration metric: took 29.504728806s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:11.062271  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.562544  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.061724  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.561848  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.061494  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.563115  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.062329  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.561714  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.061958  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.561846  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.062812  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.562021  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.061989  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.562194  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.062121  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.562872  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.061690  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.579841  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.062146  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.562196  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.062191  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.561687  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.063310  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.562075  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.062186  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.563722  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.061126  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.562052  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.063050  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.561544  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.061656  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.561600  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.061596  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.562015  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.062363  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.563052  125910 kapi.go:107] duration metric: took 45.006009347s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:08:09.573089  125910 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:08:09.573112  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.073046  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.572667  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.073307  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.572708  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.073287  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.572508  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.072992  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.572758  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.073093  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.573437  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.072566  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.573282  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.074521  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.573028  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.072539  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.572876  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.073142  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.572219  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.072579  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.572784  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.072099  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.573247  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.072893  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.573068  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.072488  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.572873  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:23.073696  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:23.573592  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:24.072885  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:24.573668  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:25.073387  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:25.573083  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:26.072343  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:26.572814  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:27.073400  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:27.572518  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:28.072305  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:28.572510  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:29.073116  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:29.572372  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:30.072858  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:30.572806  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:31.073485  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:31.572450  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:32.073200  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:32.572673  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:33.073693  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:33.573260  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:34.072874  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:34.573267  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:35.072748  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:35.573386  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:36.072094  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:36.572218  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:37.072699  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:37.572931  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:38.072219  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:38.572532  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:39.073194  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:39.572080  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:40.073091  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:40.573469  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:41.073136  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:41.572703  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:42.073322  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:42.572428  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:43.073009  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:43.572438  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:44.072629  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:44.573095  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:45.072559  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:45.573367  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:46.072685  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:46.572990  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:47.072982  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:47.571999  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:48.073067  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:48.588550  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:49.072877  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:49.573155  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:50.072247  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:50.573317  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:51.073138  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:51.573772  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:52.073611  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:52.572875  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:53.073881  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:53.573299  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:54.072777  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:54.573420  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:55.073016  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:55.572970  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:56.072242  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:56.572410  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:57.073050  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:57.573369  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:58.072595  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:58.572667  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:59.073396  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:59.572505  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:00.072751  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:00.572401  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:01.072958  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:01.572078  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:02.072605  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:02.572958  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:03.072662  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:03.572625  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:04.072990  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:04.572745  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:05.073793  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:05.572913  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:06.073123  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:06.572378  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:07.072858  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:07.573660  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:08.072698  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:08.573040  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:09.072574  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:09.572602  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:10.073016  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:10.572876  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:11.073456  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:11.572766  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:12.073342  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:12.572477  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:13.073395  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:13.572896  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:14.073831  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:14.572893  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:15.073505  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:15.573795  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:16.073600  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:16.573107  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:17.072376  125910 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:17.572800  125910 kapi.go:107] duration metric: took 2m30.00375029s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:09:17.574579  125910 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0829 18:09:17.575904  125910 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:09:17.577097  125910 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:09:17.578586  125910 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, helm-tiller, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0829 18:09:17.579807  125910 addons.go:510] duration metric: took 2m37.878527905s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner helm-tiller metrics-server yakd storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0829 18:09:17.579859  125910 start.go:246] waiting for cluster config update ...
	I0829 18:09:17.579884  125910 start.go:255] writing updated cluster config ...
	I0829 18:09:17.580157  125910 exec_runner.go:51] Run: rm -f paused
	I0829 18:09:17.626682  125910 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:09:17.628546  125910 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-23 01:12:15 UTC, end at Thu 2024-08-29 18:19:09 UTC. --
	Aug 29 18:13:16 ubuntu-20-agent-9 cri-dockerd[126455]: time="2024-08-29T18:13:16Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc"
	Aug 29 18:13:17 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:13:17.339759093Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 29 18:13:17 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:13:17.339758818Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 29 18:13:17 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:13:17.341935858Z" level=error msg="Error running exec 1f667fc1dc5fe39227a1f17f59883d68dd6025212c530d212ccc48be7c6842ac in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 29 18:13:17 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:13:17.554419204Z" level=info msg="ignoring event" container=e7ca3042f8eb57143a97b2a7b1497581dcccb37a2a9e4b743f3a4cac6f998394 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:15:52 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:15:52.140642055Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 29 18:15:52 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:15:52.142862094Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Aug 29 18:18:08 ubuntu-20-agent-9 cri-dockerd[126455]: time="2024-08-29T18:18:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d5d7ab378d94291a29099aa8ca3b59f17bde666afc8c65e3f11b267223265d2a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Aug 29 18:18:09 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:09.183205928Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:18:09 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:09.185553540Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:18:23 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:23.137123621Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:18:23 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:23.139298857Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:18:30 ubuntu-20-agent-9 cri-dockerd[126455]: time="2024-08-29T18:18:30Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc"
	Aug 29 18:18:31 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:31.402427769Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 29 18:18:31 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:31.402529247Z" level=error msg="stream copy error: reading from a closed fifo"
	Aug 29 18:18:31 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:31.406672980Z" level=error msg="Error running exec 61b70a78c174eb3e1a2d3f4f4db019b11984c3545a93ccd6ec8440683cca4806 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Aug 29 18:18:31 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:31.502061525Z" level=info msg="ignoring event" container=68aef0dd151d772690b4a457009fb36124661612c6544ba63022db60eeb9f08a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:18:49 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:49.139214635Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:18:49 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:18:49.142032234Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:19:08 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:19:08.617522409Z" level=info msg="ignoring event" container=d5d7ab378d94291a29099aa8ca3b59f17bde666afc8c65e3f11b267223265d2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:08 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:19:08.891293273Z" level=info msg="ignoring event" container=aa89fd4faff03d8ba8b9f03249f3148ce806ad4183c4ed59a8ad6b3d2c138056 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:08 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:19:08.945023991Z" level=info msg="ignoring event" container=5ae6183530f3b9f93efe4289d298abac37444313f2103affe99ed3e338113204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:09 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:19:09.042020243Z" level=info msg="ignoring event" container=66de007ae55c83675d75b80bcaee54b22ab170ab4ebf5d943ec38cfcd1899c28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:09 ubuntu-20-agent-9 cri-dockerd[126455]: time="2024-08-29T18:19:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-6fb4cdfc84-6sw4j_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Aug 29 18:19:09 ubuntu-20-agent-9 dockerd[126127]: time="2024-08-29T18:19:09.124623382Z" level=info msg="ignoring event" container=6d5cbc2442ae6117125843daddcf01b0615ca0c42aceb76907b0d835be19f9d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	68aef0dd151d7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc                            39 seconds ago      Exited              gadget                                   7                   bb2cf6689b20d       gadget-5fmls
	1a6b797a43dd0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   81a5752f0c0a7       gcp-auth-89d5ffd79-xctgv
	479aedab334bf       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   12cfd1f026fff       csi-hostpathplugin-25fjq
	6f3dd95491156       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   12cfd1f026fff       csi-hostpathplugin-25fjq
	1936fa06e8802       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   12cfd1f026fff       csi-hostpathplugin-25fjq
	34382bb159fc4       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   12cfd1f026fff       csi-hostpathplugin-25fjq
	a3b4a5e114939       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   12cfd1f026fff       csi-hostpathplugin-25fjq
	678109605f165       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   8d1a163a7e2d4       csi-hostpath-resizer-0
	0634e2041625f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   12cfd1f026fff       csi-hostpathplugin-25fjq
	b1edceb9f6029       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   5c61bf8f6e5bb       csi-hostpath-attacher-0
	7b3ef7a4e4ab1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   3c7bca5fba3dd       snapshot-controller-56fcc65765-hjspw
	65abfe0684eaf       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   22e0a75bdd9a8       snapshot-controller-56fcc65765-fnd25
	5ae6183530f3b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              12 minutes ago      Exited              registry-proxy                           0                   6d5cbc2442ae6       registry-proxy-rdsp6
	e7bf3dea96b92       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   9626ea6edf7e2       local-path-provisioner-86d989889c-jtsmr
	db315a25c2875       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        12 minutes ago      Running             yakd                                     0                   052f9494f51ff       yakd-dashboard-67d98fc6b-tdf4s
	aa89fd4faff03       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                                             12 minutes ago      Exited              registry                                 0                   66de007ae55c8       registry-6fb4cdfc84-6sw4j
	7e5264e42c78b       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  12 minutes ago      Running             tiller                                   0                   870e98044e6cf       tiller-deploy-b48cc5f79-6j94s
	9c76db9977036       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        12 minutes ago      Running             metrics-server                           0                   09ff8bf1676b8       metrics-server-8988944d9-7vfbr
	a19b021b1ac12       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               12 minutes ago      Running             cloud-spanner-emulator                   0                   545f6fb705298       cloud-spanner-emulator-769b77f747-k7g2w
	b7bfd00ae56e6       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     12 minutes ago      Running             nvidia-device-plugin-ctr                 0                   8c05439ead692       nvidia-device-plugin-daemonset-6kqmf
	cc57a048026a9       6e38f40d628db                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   9285c51039fae       storage-provisioner
	95302aa9f3d65       cbb01a7bd410d                                                                                                                                12 minutes ago      Running             coredns                                  0                   04e67f31c0a30       coredns-6f6b679f8f-n8bwx
	5d9da03a2b591       ad83b2ca7b09e                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   47e97234c4d3f       kube-proxy-2tmq9
	3a7ba2927cacc       1766f54c897f0                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   b8d093bdb188c       kube-scheduler-ubuntu-20-agent-9
	f3de7cfd58fdf       2e96e5913fc06                                                                                                                                12 minutes ago      Running             etcd                                     0                   1639c8cf285b6       etcd-ubuntu-20-agent-9
	ff6ce253a6b64       045733566833c                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   156ba2ac2f4c5       kube-controller-manager-ubuntu-20-agent-9
	a7a90a102c711       604f5db92eaa8                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   6bf24b91541aa       kube-apiserver-ubuntu-20-agent-9
	
	
	==> coredns [95302aa9f3d6] <==
	[INFO] 10.244.0.12:43779 - 13840 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149794s
	[INFO] 10.244.0.12:60329 - 47796 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007625s
	[INFO] 10.244.0.12:60329 - 10153 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095821s
	[INFO] 10.244.0.12:51317 - 27334 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000065512s
	[INFO] 10.244.0.12:51317 - 57541 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000092998s
	[INFO] 10.244.0.12:60395 - 23839 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000055662s
	[INFO] 10.244.0.12:60395 - 55781 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000046742s
	[INFO] 10.244.0.12:54557 - 12387 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000038491s
	[INFO] 10.244.0.12:54557 - 1633 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000076442s
	[INFO] 10.244.0.12:55852 - 21762 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074239s
	[INFO] 10.244.0.12:55852 - 3356 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109059s
	[INFO] 10.244.0.24:37520 - 55283 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000335873s
	[INFO] 10.244.0.24:45406 - 50217 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000397109s
	[INFO] 10.244.0.24:44231 - 19351 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010917s
	[INFO] 10.244.0.24:34761 - 28778 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140124s
	[INFO] 10.244.0.24:53211 - 3651 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078911s
	[INFO] 10.244.0.24:34417 - 56677 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093948s
	[INFO] 10.244.0.24:44899 - 44860 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004138965s
	[INFO] 10.244.0.24:39406 - 15254 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004154319s
	[INFO] 10.244.0.24:33991 - 47560 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003445868s
	[INFO] 10.244.0.24:45272 - 8228 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003754567s
	[INFO] 10.244.0.24:54861 - 33078 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003252724s
	[INFO] 10.244.0.24:58084 - 36063 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003662061s
	[INFO] 10.244.0.24:35018 - 9172 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003193247s
	[INFO] 10.244.0.24:42661 - 45961 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003336742s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-9
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-9
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-9
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-9
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:19:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:15:14 +0000   Thu, 29 Aug 2024 18:06:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:15:14 +0000   Thu, 29 Aug 2024 18:06:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:15:14 +0000   Thu, 29 Aug 2024 18:06:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:15:14 +0000   Thu, 29 Aug 2024 18:06:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.154.0.4
	  Hostname:    ubuntu-20-agent-9
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                4894487b-7b30-e033-3a9d-c6f45b6c4cf8
	  Boot ID:                    bdafbb7f-5cf1-499e-8ea4-8202709053d2
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-k7g2w      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-5fmls                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-xctgv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-n8bwx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-25fjq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ubuntu-20-agent-9                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-ubuntu-20-agent-9             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-9    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2tmq9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ubuntu-20-agent-9             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-8988944d9-7vfbr               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-6kqmf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-fnd25         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-hjspw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 tiller-deploy-b48cc5f79-6j94s                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-jtsmr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-tdf4s               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 e1 1e 1a 51 76 08 06
	[  +1.146880] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 21 ff bf 19 35 08 06
	[  +0.012912] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa ba ee 18 07 57 08 06
	[  +2.832149] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 1d ed 00 aa a0 08 06
	[  +2.091868] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c9 ab 09 f5 24 08 06
	[  +2.119562] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 22 d7 ca eb 26 08 06
	[  +6.071183] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 1c da 1d fc aa 08 06
	[  +0.332150] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 21 d9 25 5f 09 08 06
	[  +0.304919] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 a3 b0 ee 7a 75 08 06
	[Aug29 18:08] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 83 05 2f d4 ff 08 06
	[  +0.043095] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 b2 96 98 e3 77 08 06
	[Aug29 18:09] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 0f fa a7 d7 59 08 06
	[  +0.000523] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9e b9 66 aa 28 33 08 06
	
	
	==> etcd [f3de7cfd58fd] <==
	{"level":"info","ts":"2024-08-29T18:06:30.589104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-08-29T18:06:30.590044Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:30.590718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:06:30.590718Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T18:06:30.590746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:06:30.591029Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:06:30.591053Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T18:06:30.591097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:30.591183Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:30.591212Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:30.591857Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:06:30.592048Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:06:30.592742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
	{"level":"info","ts":"2024-08-29T18:06:30.592866Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T18:06:49.724087Z","caller":"traceutil/trace.go:171","msg":"trace[466426090] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"125.165447ms","start":"2024-08-29T18:06:49.598903Z","end":"2024-08-29T18:06:49.724069Z","steps":["trace[466426090] 'process raft request'  (duration: 125.041126ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:49.888795Z","caller":"traceutil/trace.go:171","msg":"trace[1920123532] linearizableReadLoop","detail":"{readStateIndex:902; appliedIndex:899; }","duration":"154.770658ms","start":"2024-08-29T18:06:49.734001Z","end":"2024-08-29T18:06:49.888771Z","steps":["trace[1920123532] 'read index received'  (duration: 73.031791ms)","trace[1920123532] 'applied index is now lower than readState.Index'  (duration: 81.73806ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:06:49.888928Z","caller":"traceutil/trace.go:171","msg":"trace[1353950046] transaction","detail":"{read_only:false; response_revision:884; number_of_response:1; }","duration":"159.617273ms","start":"2024-08-29T18:06:49.729286Z","end":"2024-08-29T18:06:49.888903Z","steps":["trace[1353950046] 'process raft request'  (duration: 159.311001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:49.889055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.998762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6f6b679f8f-n8bwx\" ","response":"range_response_count:1 size:4913"}
	{"level":"info","ts":"2024-08-29T18:06:49.889122Z","caller":"traceutil/trace.go:171","msg":"trace[322342950] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6f6b679f8f-n8bwx; range_end:; response_count:1; response_revision:884; }","duration":"155.114379ms","start":"2024-08-29T18:06:49.733995Z","end":"2024-08-29T18:06:49.889109Z","steps":["trace[322342950] 'agreement among raft nodes before linearized reading'  (duration: 154.893466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:58.364342Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.459198ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11189916337604870127 > lease_revoke:<id:1b4a919f51473de0>","response":"size:27"}
	{"level":"warn","ts":"2024-08-29T18:07:14.173783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.905499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:07:14.173857Z","caller":"traceutil/trace.go:171","msg":"trace[136391589] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1009; }","duration":"103.00496ms","start":"2024-08-29T18:07:14.070838Z","end":"2024-08-29T18:07:14.173843Z","steps":["trace[136391589] 'range keys from in-memory index tree'  (duration: 102.848656ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:30.609097Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1729}
	{"level":"info","ts":"2024-08-29T18:16:30.633535Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1729,"took":"23.886807ms","hash":1751343354,"current-db-size-bytes":8499200,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":4431872,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-08-29T18:16:30.633604Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1751343354,"revision":1729,"compact-revision":-1}
	
	
	==> gcp-auth [1a6b797a43dd] <==
	2024/08/29 18:09:16 GCP Auth Webhook started!
	2024/08/29 18:09:32 Ready to marshal response ...
	2024/08/29 18:09:32 Ready to write response ...
	2024/08/29 18:09:32 Ready to marshal response ...
	2024/08/29 18:09:32 Ready to write response ...
	2024/08/29 18:09:56 Ready to marshal response ...
	2024/08/29 18:09:56 Ready to write response ...
	2024/08/29 18:09:56 Ready to marshal response ...
	2024/08/29 18:09:56 Ready to write response ...
	2024/08/29 18:09:56 Ready to marshal response ...
	2024/08/29 18:09:56 Ready to write response ...
	2024/08/29 18:18:08 Ready to marshal response ...
	2024/08/29 18:18:08 Ready to write response ...
	
	
	==> kernel <==
	 18:19:09 up  2:01,  0 users,  load average: 0.32, 0.37, 0.41
	Linux ubuntu-20-agent-9 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [a7a90a102c71] <==
	W0829 18:07:51.501696       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.235:443: connect: connection refused
	W0829 18:07:52.570725       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.235:443: connect: connection refused
	W0829 18:07:53.580144       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.94.235:443: connect: connection refused
	W0829 18:08:09.539346       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.67.118:443: connect: connection refused
	E0829 18:08:09.539394       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.67.118:443: connect: connection refused" logger="UnhandledError"
	W0829 18:08:50.600298       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.67.118:443: connect: connection refused
	E0829 18:08:50.600339       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.67.118:443: connect: connection refused" logger="UnhandledError"
	W0829 18:08:50.609307       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.67.118:443: connect: connection refused
	E0829 18:08:50.609361       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.67.118:443: connect: connection refused" logger="UnhandledError"
	I0829 18:09:32.897122       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0829 18:09:32.914775       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0829 18:09:46.342461       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0829 18:09:46.352928       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0829 18:09:46.470269       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0829 18:09:46.472078       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0829 18:09:46.557364       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0829 18:09:46.680816       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0829 18:09:46.714348       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0829 18:09:47.370370       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0829 18:09:47.553446       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0829 18:09:47.558417       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0829 18:09:47.620016       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0829 18:09:47.697728       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0829 18:09:47.715047       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0829 18:09:47.905286       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [ff6ce253a6b6] <==
	W0829 18:17:59.418525       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:59.418571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:10.876709       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:10.876760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:17.589774       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:17.589820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:18.963958       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:18.964004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:25.917079       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:25.917131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:26.754604       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:26.754654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:46.765783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:46.765845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:51.432598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:51.432642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:51.819798       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:51.819842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:57.511372       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:57.511423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:04.492120       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:04.492170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:19:08.852982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="9.641µs"
	W0829 18:19:09.395908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:09.395952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [5d9da03a2b59] <==
	I0829 18:06:39.413873       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:06:39.531750       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
	E0829 18:06:39.531818       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:39.551755       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:06:39.551825       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:39.553863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:39.554265       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:39.554284       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:39.556071       1 config.go:197] "Starting service config controller"
	I0829 18:06:39.556236       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:39.556268       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:39.556349       1 config.go:326] "Starting node config controller"
	I0829 18:06:39.556365       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:39.556118       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:39.656464       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:06:39.656546       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:39.656555       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3a7ba2927cac] <==
	W0829 18:06:31.512657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 18:06:31.512685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:31.512649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:31.512719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.344437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:06:32.344482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.417627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:06:32.417668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.473073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:32.473120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.526091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:32.526140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.649233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 18:06:32.649285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.684816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:32.684862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.706193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:32.706252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.813404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0829 18:06:32.813434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:32.813473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:32.813514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:32.985988       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:32.986036       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:06:35.409436       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-23 01:12:15 UTC, end at Thu 2024-08-29 18:19:09 UTC. --
	Aug 29 18:18:37 ubuntu-20-agent-9 kubelet[127361]: I0829 18:18:37.092033  127361 scope.go:117] "RemoveContainer" containerID="68aef0dd151d772690b4a457009fb36124661612c6544ba63022db60eeb9f08a"
	Aug 29 18:18:37 ubuntu-20-agent-9 kubelet[127361]: E0829 18:18:37.092225  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-5fmls_gadget(4a90995b-0ba2-4a2e-88ec-81d2c81c36a6)\"" pod="gadget/gadget-5fmls" podUID="4a90995b-0ba2-4a2e-88ec-81d2c81c36a6"
	Aug 29 18:18:40 ubuntu-20-agent-9 kubelet[127361]: E0829 18:18:40.984354  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0a4e6e65-271c-42f2-9fc8-bcef7c335f65"
	Aug 29 18:18:47 ubuntu-20-agent-9 kubelet[127361]: I0829 18:18:47.983468  127361 scope.go:117] "RemoveContainer" containerID="68aef0dd151d772690b4a457009fb36124661612c6544ba63022db60eeb9f08a"
	Aug 29 18:18:47 ubuntu-20-agent-9 kubelet[127361]: E0829 18:18:47.983795  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-5fmls_gadget(4a90995b-0ba2-4a2e-88ec-81d2c81c36a6)\"" pod="gadget/gadget-5fmls" podUID="4a90995b-0ba2-4a2e-88ec-81d2c81c36a6"
	Aug 29 18:18:49 ubuntu-20-agent-9 kubelet[127361]: E0829 18:18:49.142599  127361 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Aug 29 18:18:49 ubuntu-20-agent-9 kubelet[127361]: E0829 18:18:49.142800  127361 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pppfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(b710e0e6-3294-4fa3-9823-85cf8f85e59b): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Aug 29 18:18:49 ubuntu-20-agent-9 kubelet[127361]: E0829 18:18:49.144001  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="b710e0e6-3294-4fa3-9823-85cf8f85e59b"
	Aug 29 18:18:55 ubuntu-20-agent-9 kubelet[127361]: E0829 18:18:55.985686  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0a4e6e65-271c-42f2-9fc8-bcef7c335f65"
	Aug 29 18:19:01 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:01.983206  127361 scope.go:117] "RemoveContainer" containerID="68aef0dd151d772690b4a457009fb36124661612c6544ba63022db60eeb9f08a"
	Aug 29 18:19:01 ubuntu-20-agent-9 kubelet[127361]: E0829 18:19:01.983395  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-5fmls_gadget(4a90995b-0ba2-4a2e-88ec-81d2c81c36a6)\"" pod="gadget/gadget-5fmls" podUID="4a90995b-0ba2-4a2e-88ec-81d2c81c36a6"
	Aug 29 18:19:04 ubuntu-20-agent-9 kubelet[127361]: E0829 18:19:04.985631  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="b710e0e6-3294-4fa3-9823-85cf8f85e59b"
	Aug 29 18:19:06 ubuntu-20-agent-9 kubelet[127361]: E0829 18:19:06.984487  127361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0a4e6e65-271c-42f2-9fc8-bcef7c335f65"
	Aug 29 18:19:08 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:08.761107  127361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pppfl\" (UniqueName: \"kubernetes.io/projected/b710e0e6-3294-4fa3-9823-85cf8f85e59b-kube-api-access-pppfl\") pod \"b710e0e6-3294-4fa3-9823-85cf8f85e59b\" (UID: \"b710e0e6-3294-4fa3-9823-85cf8f85e59b\") "
	Aug 29 18:19:08 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:08.761181  127361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b710e0e6-3294-4fa3-9823-85cf8f85e59b-gcp-creds\") pod \"b710e0e6-3294-4fa3-9823-85cf8f85e59b\" (UID: \"b710e0e6-3294-4fa3-9823-85cf8f85e59b\") "
	Aug 29 18:19:08 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:08.761234  127361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b710e0e6-3294-4fa3-9823-85cf8f85e59b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b710e0e6-3294-4fa3-9823-85cf8f85e59b" (UID: "b710e0e6-3294-4fa3-9823-85cf8f85e59b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 18:19:08 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:08.761349  127361 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b710e0e6-3294-4fa3-9823-85cf8f85e59b-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Aug 29 18:19:08 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:08.763130  127361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b710e0e6-3294-4fa3-9823-85cf8f85e59b-kube-api-access-pppfl" (OuterVolumeSpecName: "kube-api-access-pppfl") pod "b710e0e6-3294-4fa3-9823-85cf8f85e59b" (UID: "b710e0e6-3294-4fa3-9823-85cf8f85e59b"). InnerVolumeSpecName "kube-api-access-pppfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:08 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:08.861804  127361 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pppfl\" (UniqueName: \"kubernetes.io/projected/b710e0e6-3294-4fa3-9823-85cf8f85e59b-kube-api-access-pppfl\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Aug 29 18:19:09 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:09.164187  127361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxdhv\" (UniqueName: \"kubernetes.io/projected/3053c949-3bc4-4299-9eb4-e8a861711ddc-kube-api-access-hxdhv\") pod \"3053c949-3bc4-4299-9eb4-e8a861711ddc\" (UID: \"3053c949-3bc4-4299-9eb4-e8a861711ddc\") "
	Aug 29 18:19:09 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:09.166439  127361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3053c949-3bc4-4299-9eb4-e8a861711ddc-kube-api-access-hxdhv" (OuterVolumeSpecName: "kube-api-access-hxdhv") pod "3053c949-3bc4-4299-9eb4-e8a861711ddc" (UID: "3053c949-3bc4-4299-9eb4-e8a861711ddc"). InnerVolumeSpecName "kube-api-access-hxdhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:09 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:09.265104  127361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgrdp\" (UniqueName: \"kubernetes.io/projected/78555fec-eab4-4bdb-b2dc-7d3440c326e6-kube-api-access-cgrdp\") pod \"78555fec-eab4-4bdb-b2dc-7d3440c326e6\" (UID: \"78555fec-eab4-4bdb-b2dc-7d3440c326e6\") "
	Aug 29 18:19:09 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:09.265258  127361 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hxdhv\" (UniqueName: \"kubernetes.io/projected/3053c949-3bc4-4299-9eb4-e8a861711ddc-kube-api-access-hxdhv\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Aug 29 18:19:09 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:09.267068  127361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78555fec-eab4-4bdb-b2dc-7d3440c326e6-kube-api-access-cgrdp" (OuterVolumeSpecName: "kube-api-access-cgrdp") pod "78555fec-eab4-4bdb-b2dc-7d3440c326e6" (UID: "78555fec-eab4-4bdb-b2dc-7d3440c326e6"). InnerVolumeSpecName "kube-api-access-cgrdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:09 ubuntu-20-agent-9 kubelet[127361]: I0829 18:19:09.366388  127361 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cgrdp\" (UniqueName: \"kubernetes.io/projected/78555fec-eab4-4bdb-b2dc-7d3440c326e6-kube-api-access-cgrdp\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	
	
	==> storage-provisioner [cc57a048026a] <==
	I0829 18:06:42.234094       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:06:42.248326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:06:42.248382       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:06:42.260311       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:06:42.260549       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_59aa5ef2-40e4-495f-865b-73ead2f525f8!
	I0829 18:06:42.260754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf30694d-b330-4e48-9930-1296a2d6c962", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_59aa5ef2-40e4-495f-865b-73ead2f525f8 became leader
	I0829 18:06:42.365495       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_59aa5ef2-40e4-495f-865b-73ead2f525f8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-9/10.154.0.4
	Start Time:       Thu, 29 Aug 2024 18:09:56 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-594r5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-594r5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-9
	  Warning  Failed     7m52s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m38s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m37s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m37s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4m9s (x22 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.98s)
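The failure above is an image-pull authorization error rather than a registry-addon defect: every pull of gcr.io/k8s-minikube/busybox in the kubelet log ends in `unauthorized: authentication failed`, so the test pod never starts and the wget probe times out. A quick triage sketch for a saved copy of the kubelet journal (the file name `kubelet.log` and the helper function are hypothetical, not part of the test harness):

```shell
# Count the two pull-failure signatures seen in the kubelet log above.
# ErrImagePull marks a fresh failed pull attempt; ImagePullBackOff marks
# the kubelet's retry back-off for the same image.
count_pull_errors() {
  log_file="$1"
  printf 'ErrImagePull=%s ImagePullBackOff=%s\n' \
    "$(grep -c 'ErrImagePull' "$log_file")" \
    "$(grep -c 'ImagePullBackOff' "$log_file")"
}
```

If both counters climb for an image that should be publicly pullable, the likely causes are stale registry credentials on the CI host or a transient gcr.io outage, not the addon under test.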


Test pass (105/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 3.06
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.0/json-events 1.5
15 TestDownloadOnly/v1.31.0/binaries 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.12
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 45.08
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 177.16
29 TestAddons/serial/Volcano 38.72
31 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/parallel/InspektorGadget 10.54
36 TestAddons/parallel/MetricsServer 5.41
37 TestAddons/parallel/HelmTiller 9.74
39 TestAddons/parallel/CSI 42.24
40 TestAddons/parallel/Headlamp 14.98
41 TestAddons/parallel/CloudSpanner 5.29
43 TestAddons/parallel/NvidiaDevicePlugin 6.24
44 TestAddons/parallel/Yakd 10.46
45 TestAddons/StoppedEnableDisable 10.71
47 TestCertExpiration 228.18
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 28.06
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 26.66
62 TestFunctional/serial/KubeContext 0.05
63 TestFunctional/serial/KubectlGetPods 0.07
65 TestFunctional/serial/MinikubeKubectlCmd 0.11
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
67 TestFunctional/serial/ExtraConfig 38.02
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.87
70 TestFunctional/serial/LogsFileCmd 0.91
71 TestFunctional/serial/InvalidService 5.1
73 TestFunctional/parallel/ConfigCmd 0.3
74 TestFunctional/parallel/DashboardCmd 5.01
75 TestFunctional/parallel/DryRun 0.17
76 TestFunctional/parallel/InternationalLanguage 0.09
77 TestFunctional/parallel/StatusCmd 0.44
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
81 TestFunctional/parallel/ProfileCmd/profile_list 0.24
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.24
84 TestFunctional/parallel/ServiceCmd/DeployApp 10.16
85 TestFunctional/parallel/ServiceCmd/List 0.35
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.17
88 TestFunctional/parallel/ServiceCmd/Format 0.16
89 TestFunctional/parallel/ServiceCmd/URL 0.17
90 TestFunctional/parallel/ServiceCmdConnect 7.33
91 TestFunctional/parallel/AddonsCmd 0.12
92 TestFunctional/parallel/PersistentVolumeClaim 21.22
105 TestFunctional/parallel/MySQL 21.14
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.32
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.63
114 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.25
120 TestFunctional/parallel/License 0.96
121 TestFunctional/delete_echo-server_images 0.03
122 TestFunctional/delete_my-image_image 0.02
123 TestFunctional/delete_minikube_cached_images 0.02
128 TestImageBuild/serial/Setup 15.04
129 TestImageBuild/serial/NormalBuild 2.94
130 TestImageBuild/serial/BuildWithBuildArg 0.98
131 TestImageBuild/serial/BuildWithDockerIgnore 0.79
132 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
136 TestJSONOutput/start/Command 29.79
137 TestJSONOutput/start/Audit 0
139 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/pause/Command 0.52
143 TestJSONOutput/pause/Audit 0
145 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/unpause/Command 0.43
149 TestJSONOutput/unpause/Audit 0
151 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/stop/Command 10.46
155 TestJSONOutput/stop/Audit 0
157 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
159 TestErrorJSONOutput 0.21
164 TestMainNoArgs 0.05
165 TestMinikubeProfile 34.36
173 TestPause/serial/Start 27.79
174 TestPause/serial/SecondStartNoReconfiguration 31.36
175 TestPause/serial/Pause 0.56
176 TestPause/serial/VerifyStatus 0.14
177 TestPause/serial/Unpause 0.43
178 TestPause/serial/PauseAgain 0.54
179 TestPause/serial/DeletePaused 1.85
180 TestPause/serial/VerifyDeletedResources 0.07
194 TestRunningBinaryUpgrade 79.66
196 TestStoppedBinaryUpgrade/Setup 2.32
197 TestStoppedBinaryUpgrade/Upgrade 51.18
198 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
199 TestKubernetesUpgrade 310.54
TestDownloadOnly/v1.20.0/json-events (3.06s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (3.059888967s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (3.06s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (65.592895ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:29
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:29.275980  122229 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:29.276097  122229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:29.276107  122229 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:29.276112  122229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:29.276357  122229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-115450/.minikube/bin
	W0829 18:05:29.276498  122229 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19531-115450/.minikube/config/config.json: open /home/jenkins/minikube-integration/19531-115450/.minikube/config/config.json: no such file or directory
	I0829 18:05:29.277109  122229 out.go:352] Setting JSON to true
	I0829 18:05:29.278149  122229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6475,"bootTime":1724948254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:29.278222  122229 start.go:139] virtualization: kvm guest
	I0829 18:05:29.280607  122229 out.go:97] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:05:29.280789  122229 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-115450/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:05:29.280861  122229 notify.go:220] Checking for updates...
	I0829 18:05:29.282187  122229 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:29.283835  122229 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:29.285170  122229 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	I0829 18:05:29.286388  122229 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	I0829 18:05:29.287552  122229 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.0/json-events (1.5s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.5006104s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (1.50s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
--- PASS: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (70.264413ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:32.657007  122377 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:32.657103  122377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:32.657108  122377 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:32.657112  122377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:32.657324  122377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-115450/.minikube/bin
	I0829 18:05:32.657892  122377 out.go:352] Setting JSON to true
	I0829 18:05:32.658749  122377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6479,"bootTime":1724948254,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:32.658814  122377 start.go:139] virtualization: kvm guest
	I0829 18:05:32.661110  122377 out.go:97] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:05:32.661277  122377 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-115450/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:05:32.661342  122377 notify.go:220] Checking for updates...
	I0829 18:05:32.662736  122377 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:32.664343  122377 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:32.665924  122377 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	I0829 18:05:32.667499  122377 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	I0829 18:05:32.668865  122377 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:35115 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.58s)

TestOffline (45.08s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (43.372958195s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.708402203s)
--- PASS: TestOffline (45.08s)

                                                
                                    
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (51.229854ms)
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (52.328551ms)
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (2m57.156655816s)
--- PASS: TestAddons/Setup (177.16s)

                                                
                                    
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 10.503052ms
addons_test.go:905: volcano-admission stabilized in 10.523027ms
addons_test.go:913: volcano-controller stabilized in 10.582684ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-mx9cd" [be8736de-cecc-4a43-92b0-dec93ec9447d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003725967s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-4lf9w" [638b75b7-a3a0-40bd-9190-dd4ba67ec34a] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003347119s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-sjmpx" [4ea422ec-bb02-4bec-a631-ebcb5b789902] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004409121s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f197af5c-5ace-40cd-9d5b-72619141b3c2] Pending
helpers_test.go:344: "test-job-nginx-0" [f197af5c-5ace-40cd-9d5b-72619141b3c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f197af5c-5ace-40cd-9d5b-72619141b3c2] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004537613s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.373093277s)
--- PASS: TestAddons/serial/Volcano (38.72s)

                                                
                                    
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5fmls" [4a90995b-0ba2-4a2e-88ec-81d2c81c36a6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004162363s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.533773315s)
--- PASS: TestAddons/parallel/InspektorGadget (10.54s)

                                                
                                    
=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.231202ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-7vfbr" [8657419a-85d2-49d6-bf71-dbcabc1f5007] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004628916s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.41s)

                                                
                                    
=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.125232ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-6j94s" [dfd7cfb3-e540-4e6a-bf1b-cf252955d4ae] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004102266s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.444006466s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.74s)

                                                
                                    
=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.044432ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [610efb26-319b-44c1-a381-9b74252931de] Pending
helpers_test.go:344: "task-pv-pod" [610efb26-319b-44c1-a381-9b74252931de] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [610efb26-319b-44c1-a381-9b74252931de] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004288675s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [21c2bd3a-1a51-459c-aab4-f707e0ea46ff] Pending
helpers_test.go:344: "task-pv-pod-restore" [21c2bd3a-1a51-459c-aab4-f707e0ea46ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [21c2bd3a-1a51-459c-aab4-f707e0ea46ff] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004086605s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.333189516s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.24s)

                                                
                                    
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-8mx24" [fcca59c4-9335-47b2-bc2d-00d4dfc0178f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-8mx24" [fcca59c4-9335-47b2-bc2d-00d4dfc0178f] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-8mx24" [fcca59c4-9335-47b2-bc2d-00d4dfc0178f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004098344s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.453416624s)
--- PASS: TestAddons/parallel/Headlamp (14.98s)

                                                
                                    
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-k7g2w" [bb320d66-e2aa-4745-b476-c18f74144192] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004329436s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.29s)

                                                
                                    
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6kqmf" [0bc9b817-9548-41ae-8e30-647135c64203] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003399922s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.24s)

                                                
                                    
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-tdf4s" [4ac9f485-1b68-41da-ade5-23e5413e59a4] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0037712s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.455821125s)
--- PASS: TestAddons/parallel/Yakd (10.46s)

                                                
                                    
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.395796441s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.71s)

                                                
                                    
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.430801427s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.991996299s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.755189136s)
--- PASS: TestCertExpiration (228.18s)

                                                
                                    
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19531-115450/.minikube/files/etc/test/nested/copy/122217/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (28.055375345s)
--- PASS: TestFunctional/serial/StartWithProxy (28.06s)

                                                
                                    
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (26.661247159s)
functional_test.go:663: soft start took 26.661987308s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.66s)

                                                
                                    
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.023520246s)
functional_test.go:761: restart took 38.023642945s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.02s)

                                                
                                    
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

                                                
                                    
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1243095085/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.91s)

                                                
                                    
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (171.13043ms)
-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.154.0.4:32764 |
	|-----------|-------------|-------------|-------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.752552989s)
--- PASS: TestFunctional/serial/InvalidService (5.10s)

                                                
                                    
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (49.135536ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (46.139226ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
TestFunctional/parallel/DashboardCmd (5.01s)
=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/08/29 18:26:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 158003: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.01s)
TestFunctional/parallel/DryRun (0.17s)
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (83.557396ms)
-- stdout --
	* minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0829 18:26:40.779452  158356 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:26:40.779774  158356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:26:40.779785  158356 out.go:358] Setting ErrFile to fd 2...
	I0829 18:26:40.779789  158356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:26:40.779968  158356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-115450/.minikube/bin
	I0829 18:26:40.780517  158356 out.go:352] Setting JSON to false
	I0829 18:26:40.781554  158356 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7747,"bootTime":1724948254,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:26:40.781624  158356 start.go:139] virtualization: kvm guest
	I0829 18:26:40.783868  158356 out.go:177] * minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:26:40.785192  158356 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-115450/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:26:40.785246  158356 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:26:40.785248  158356 notify.go:220] Checking for updates...
	I0829 18:26:40.787728  158356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:26:40.789068  158356 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	I0829 18:26:40.790217  158356 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	I0829 18:26:40.791396  158356 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:26:40.792515  158356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:26:40.794197  158356 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:26:40.794512  158356 exec_runner.go:51] Run: systemctl --version
	I0829 18:26:40.797665  158356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:26:40.810083  158356 out.go:177] * Using the none driver based on existing profile
	I0829 18:26:40.811480  158356 start.go:297] selected driver: none
	I0829 18:26:40.811500  158356 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:26:40.811651  158356 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:26:40.811683  158356 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0829 18:26:40.812042  158356 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0829 18:26:40.814028  158356 out.go:201] 
	W0829 18:26:40.815400  158356 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 18:26:40.816638  158356 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)
TestFunctional/parallel/InternationalLanguage (0.09s)
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (87.270824ms)
-- stdout --
	* minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0829 18:26:40.949204  158386 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:26:40.949345  158386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:26:40.949358  158386 out.go:358] Setting ErrFile to fd 2...
	I0829 18:26:40.949365  158386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:26:40.949687  158386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-115450/.minikube/bin
	I0829 18:26:40.950253  158386 out.go:352] Setting JSON to false
	I0829 18:26:40.951268  158386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7747,"bootTime":1724948254,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:26:40.951349  158386 start.go:139] virtualization: kvm guest
	I0829 18:26:40.953540  158386 out.go:177] * minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	W0829 18:26:40.955141  158386 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-115450/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:26:40.955174  158386 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:26:40.955230  158386 notify.go:220] Checking for updates...
	I0829 18:26:40.957927  158386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:26:40.959749  158386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	I0829 18:26:40.961074  158386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	I0829 18:26:40.962353  158386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:26:40.963590  158386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:26:40.965308  158386 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:26:40.965660  158386 exec_runner.go:51] Run: systemctl --version
	I0829 18:26:40.968511  158386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:26:40.980054  158386 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0829 18:26:40.981422  158386 start.go:297] selected driver: none
	I0829 18:26:40.981440  158386 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:26:40.981557  158386 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:26:40.981581  158386 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0829 18:26:40.981872  158386 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0829 18:26:40.983950  158386 out.go:201] 
	W0829 18:26:40.985144  158386 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 18:26:40.986405  158386 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)
TestFunctional/parallel/StatusCmd (0.44s)
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.44s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
TestFunctional/parallel/ProfileCmd/profile_list (0.24s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "190.269254ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.185947ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.24s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "187.571213ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.619906ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)
TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jwfrq" [ecbb581f-5e45-44aa-b9ae-b1b7ebed9ca5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jwfrq" [ecbb581f-5e45-44aa-b9ae-b1b7ebed9ca5] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004085186s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)
TestFunctional/parallel/ServiceCmd/List (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "339.180742ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.17s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.154.0.4:31478
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.17s)
TestFunctional/parallel/ServiceCmd/Format (0.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.16s)
TestFunctional/parallel/ServiceCmd/URL (0.17s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.154.0.4:31478
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.17s)
TestFunctional/parallel/ServiceCmdConnect (7.33s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vwr94" [403e852f-63e1-4cb2-b858-9cd9cdc1a611] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vwr94" [403e852f-63e1-4cb2-b858-9cd9cdc1a611] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004351111s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.154.0.4:30186
functional_test.go:1675: http://10.154.0.4:30186: success! body:
Hostname: hello-node-connect-67bdd5bbb4-vwr94
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.154.0.4:8080/
Request Headers:
	accept-encoding=gzip
	host=10.154.0.4:30186
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.33s)
TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
TestFunctional/parallel/PersistentVolumeClaim (21.22s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [45a438aa-47a4-43e4-925c-56fc857c01bf] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004385413s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [18ec91d7-c2d9-4525-a424-e6f07826c2ac] Pending
helpers_test.go:344: "sp-pod" [18ec91d7-c2d9-4525-a424-e6f07826c2ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [18ec91d7-c2d9-4525-a424-e6f07826c2ac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004243689s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b4612d6a-49a9-4be4-a091-b85589d3e448] Pending
helpers_test.go:344: "sp-pod" [b4612d6a-49a9-4be4-a091-b85589d3e448] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b4612d6a-49a9-4be4-a091-b85589d3e448] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003606499s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.22s)
TestFunctional/parallel/MySQL (21.14s)
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-6rh9t" [948f2eb9-cd5e-4587-8098-48765a22f1f7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-6rh9t" [948f2eb9-cd5e-4587-8098-48765a22f1f7] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.004105186s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-6rh9t -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-6rh9t -- mysql -ppassword -e "show databases;": exit status 1 (186.423907ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-6rh9t -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-6rh9t -- mysql -ppassword -e "show databases;": exit status 1 (118.577433ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-6rh9t -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.14s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.32s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.321580319s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.32s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (14.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.625655542s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.63s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.96s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (15.04s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (15.038946211s)
--- PASS: TestImageBuild/serial/Setup (15.04s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (2.94s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (2.9357586s)
--- PASS: TestImageBuild/serial/NormalBuild (2.94s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (0.98s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

                                                
                                    
x
+
TestJSONOutput/start/Command (29.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (29.789064254s)
--- PASS: TestJSONOutput/start/Command (29.79s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.52s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.43s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (10.46s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.463080541s)
--- PASS: TestJSONOutput/stop/Command (10.46s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.543411ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0b85445e-8a4d-4beb-b5e8-8451c8e2613e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1adc4d96-4118-407b-8f1f-1c9cd70c4497","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"b89f7819-301b-411b-a9a5-05fb0f3eb95e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4b040c7b-653c-4e79-b394-b2d888f29d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig"}}
	{"specversion":"1.0","id":"6fff3e7e-92e0-41e8-b905-62883162efa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube"}}
	{"specversion":"1.0","id":"e107794f-b838-4ece-8ff8-3b3842cea785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f0c67af0-675b-4ff8-b703-a781fa1477dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"733b9cdd-f051-47b4-8ae9-a172a5ac9b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (34.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.4567594s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.908945203s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.327600729s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.36s)

                                                
                                    
x
+
TestPause/serial/Start (27.79s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (27.793827586s)
--- PASS: TestPause/serial/Start (27.79s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (31.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (31.35665124s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.36s)

                                                
                                    
x
+
TestPause/serial/Pause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (136.996179ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.43s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.43s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.54s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.845109928s)
--- PASS: TestPause/serial/DeletePaused (1.85s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (79.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3519458961 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3519458961 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (36.392682064s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (37.39501574s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.159972267s)
--- PASS: TestRunningBinaryUpgrade (79.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (51.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2630709963 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2630709963 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.849964788s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2630709963 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2630709963 -p minikube stop: (23.765469725s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.567224014s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                    
x
+
TestKubernetesUpgrade (310.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (32.632002246s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.874268585s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (75.879579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.464006324s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (83.552773ms)

                                                
                                                
-- stdout --
	* minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-115450/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-115450/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.987721998s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.363589119s)
--- PASS: TestKubernetesUpgrade (310.54s)

                                                
                                    

Test skip (61/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.0/preload-exists 0
14 TestDownloadOnly/v1.31.0/cached-images 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
98 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
101 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
103 TestFunctional/parallel/SSHCmd 0
104 TestFunctional/parallel/CpCmd 0
106 TestFunctional/parallel/FileSync 0
107 TestFunctional/parallel/CertSync 0
112 TestFunctional/parallel/DockerEnv 0
113 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/ImageCommands 0
116 TestFunctional/parallel/NonActiveRuntimeDisabled 0
124 TestGvisorAddon 0
125 TestMultiControlPlane 0
133 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
160 TestKicCustomNetwork 0
161 TestKicExistingNetwork 0
162 TestKicCustomSubnet 0
163 TestKicStaticIP 0
166 TestMountStart 0
167 TestMultiNode 0
168 TestNetworkPlugins 0
169 TestNoKubernetes 0
170 TestChangeNoneUser 0
181 TestPreload 0
182 TestScheduledStopWindows 0
183 TestScheduledStopUnix 0
184 TestSkaffold 0
187 TestStartStop/group/old-k8s-version 0.14
188 TestStartStop/group/newest-cni 0.14
189 TestStartStop/group/default-k8s-diff-port 0.14
190 TestStartStop/group/no-preload 0.14
191 TestStartStop/group/disable-driver-mounts 0.14
192 TestStartStop/group/embed-certs 0.14
193 TestInsufficientStorage 0
200 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.14s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)

TestStartStop/group/newest-cni (0.14s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)

TestStartStop/group/default-k8s-diff-port (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)

TestStartStop/group/no-preload (0.14s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.14s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestStartStop/group/embed-certs (0.14s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)