Test Report: none_Linux 19664

b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285

Failed tests (1/167)

Order | Failed test                  | Duration
33    | TestAddons/parallel/Registry | 71.97s
TestAddons/parallel/Registry (71.97s)
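The test fails because the in-cluster reachability probe (`wget --spider -S http://registry.kube-system.svc.cluster.local`) times out. The probe's logic can be sketched outside the cluster as a small Python helper; the function name, timeout value, and local URL below are illustrative assumptions, not part of the test code:

```python
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 5.0) -> bool:
    """HEAD-style reachability check, analogous to `wget --spider -S <url>`.
    Returns True only when the server answers with a 2xx/3xx status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # DNS failure, connection refused, or timeout -- the registry
        # service is unreachable, which is what this test run hit.
        return False

# Inside the cluster the equivalent call would be:
# probe("http://registry.kube-system.svc.cluster.local")
```

In the run below, the busybox pod never gets an HTTP response within the 1m0s limit, so `kubectl run` exits non-zero and the assertion at addons_test.go:349 fails.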

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.582807ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-bhm68" [d43e593d-3a8a-4415-b4cc-21dfcb93cfab] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003918257s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6wfzk" [d311bda0-57b4-46a9-a6b7-207e358dc87d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003194879s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.086839253s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/19 18:54:04 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:42313               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:40 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:44 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 19 Sep 24 18:44 UTC | 19 Sep 24 18:44 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:54 UTC | 19 Sep 24 18:54 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 19 Sep 24 18:54 UTC | 19 Sep 24 18:54 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:40:25
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:40:25.685460   18347 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:40:25.685733   18347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:25.685743   18347 out.go:358] Setting ErrFile to fd 2...
	I0919 18:40:25.685748   18347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:25.685971   18347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7688/.minikube/bin
	I0919 18:40:25.686602   18347 out.go:352] Setting JSON to false
	I0919 18:40:25.687477   18347 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1361,"bootTime":1726769865,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:40:25.687575   18347 start.go:139] virtualization: kvm guest
	I0919 18:40:25.689965   18347 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0919 18:40:25.691116   18347 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-7688/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:40:25.691164   18347 notify.go:220] Checking for updates...
	I0919 18:40:25.692383   18347 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:40:25.693711   18347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:40:25.695118   18347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	I0919 18:40:25.696420   18347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	I0919 18:40:25.697648   18347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:40:25.699311   18347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:40:25.700826   18347 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:40:25.710930   18347 out.go:177] * Using the none driver based on user configuration
	I0919 18:40:25.712096   18347 start.go:297] selected driver: none
	I0919 18:40:25.712107   18347 start.go:901] validating driver "none" against <nil>
	I0919 18:40:25.712118   18347 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:40:25.712149   18347 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0919 18:40:25.712464   18347 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0919 18:40:25.713006   18347 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:40:25.713243   18347 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:25.713274   18347 cni.go:84] Creating CNI manager for ""
	I0919 18:40:25.713314   18347 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:40:25.713323   18347 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:40:25.713362   18347 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:25.714898   18347 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0919 18:40:25.716588   18347 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/config.json ...
	I0919 18:40:25.716620   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/config.json: {Name:mk95c75e9f5f228fb8c14616277ee703a0358d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:25.716753   18347 start.go:360] acquireMachinesLock for minikube: {Name:mk32188cec835b150bb972c996c7dc4473a12385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 18:40:25.716786   18347 start.go:364] duration metric: took 20.156µs to acquireMachinesLock for "minikube"
	I0919 18:40:25.716798   18347 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:40:25.716881   18347 start.go:125] createHost starting for "" (driver="none")
	I0919 18:40:25.718370   18347 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0919 18:40:25.719592   18347 exec_runner.go:51] Run: systemctl --version
	I0919 18:40:25.722003   18347 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0919 18:40:25.722031   18347 client.go:168] LocalClient.Create starting
	I0919 18:40:25.722074   18347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7688/.minikube/certs/ca.pem
	I0919 18:40:25.722111   18347 main.go:141] libmachine: Decoding PEM data...
	I0919 18:40:25.722133   18347 main.go:141] libmachine: Parsing certificate...
	I0919 18:40:25.722182   18347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7688/.minikube/certs/cert.pem
	I0919 18:40:25.722199   18347 main.go:141] libmachine: Decoding PEM data...
	I0919 18:40:25.722213   18347 main.go:141] libmachine: Parsing certificate...
	I0919 18:40:25.722477   18347 client.go:171] duration metric: took 433.939µs to LocalClient.Create
	I0919 18:40:25.722498   18347 start.go:167] duration metric: took 497.346µs to libmachine.API.Create "minikube"
	I0919 18:40:25.722503   18347 start.go:293] postStartSetup for "minikube" (driver="none")
	I0919 18:40:25.722541   18347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:40:25.722574   18347 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:40:25.731393   18347 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:40:25.731439   18347 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:40:25.731459   18347 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:40:25.733473   18347 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0919 18:40:25.734796   18347 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7688/.minikube/addons for local assets ...
	I0919 18:40:25.734874   18347 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7688/.minikube/files for local assets ...
	I0919 18:40:25.734903   18347 start.go:296] duration metric: took 12.391926ms for postStartSetup
	I0919 18:40:25.735770   18347 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/config.json ...
	I0919 18:40:25.736009   18347 start.go:128] duration metric: took 19.117616ms to createHost
	I0919 18:40:25.736028   18347 start.go:83] releasing machines lock for "minikube", held for 19.232857ms
	I0919 18:40:25.736391   18347 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:40:25.736495   18347 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0919 18:40:25.738478   18347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 18:40:25.738522   18347 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:25.747992   18347 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 18:40:25.748028   18347 start.go:495] detecting cgroup driver to use...
	I0919 18:40:25.748061   18347 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:40:25.748184   18347 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:40:25.767569   18347 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0919 18:40:25.777554   18347 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 18:40:25.787261   18347 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 18:40:25.787323   18347 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 18:40:25.796661   18347 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:40:25.807050   18347 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 18:40:25.817303   18347 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 18:40:25.826457   18347 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:40:25.836338   18347 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 18:40:25.845913   18347 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 18:40:25.856019   18347 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 18:40:25.865810   18347 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:40:25.874019   18347 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:40:25.882015   18347 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0919 18:40:26.115300   18347 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0919 18:40:26.238502   18347 start.go:495] detecting cgroup driver to use...
	I0919 18:40:26.238576   18347 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:40:26.238698   18347 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:40:26.260077   18347 exec_runner.go:51] Run: which cri-dockerd
	I0919 18:40:26.260991   18347 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 18:40:26.269832   18347 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0919 18:40:26.269871   18347 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0919 18:40:26.269921   18347 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0919 18:40:26.279099   18347 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0919 18:40:26.279248   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2502331824 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0919 18:40:26.288985   18347 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0919 18:40:26.525061   18347 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0919 18:40:26.762336   18347 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 18:40:26.762515   18347 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0919 18:40:26.762533   18347 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0919 18:40:26.762589   18347 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0919 18:40:26.771934   18347 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0919 18:40:26.772098   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1704304775 /etc/docker/daemon.json
	I0919 18:40:26.780438   18347 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0919 18:40:27.000545   18347 exec_runner.go:51] Run: sudo systemctl restart docker
	I0919 18:40:27.411218   18347 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 18:40:27.422626   18347 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0919 18:40:27.438911   18347 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:40:27.450564   18347 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0919 18:40:27.688582   18347 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0919 18:40:27.920074   18347 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0919 18:40:28.153158   18347 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0919 18:40:28.168183   18347 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 18:40:28.180091   18347 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0919 18:40:28.430897   18347 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0919 18:40:28.500060   18347 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 18:40:28.500142   18347 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0919 18:40:28.501479   18347 start.go:563] Will wait 60s for crictl version
	I0919 18:40:28.501534   18347 exec_runner.go:51] Run: which crictl
	I0919 18:40:28.502515   18347 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0919 18:40:28.531458   18347 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0919 18:40:28.531522   18347 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0919 18:40:28.553706   18347 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0919 18:40:28.578682   18347 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0919 18:40:28.578765   18347 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0919 18:40:28.581751   18347 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0919 18:40:28.583120   18347 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:40:28.583246   18347 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0919 18:40:28.583256   18347 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.1 docker true true} ...
	I0919 18:40:28.583343   18347 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0919 18:40:28.583386   18347 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0919 18:40:28.635825   18347 cni.go:84] Creating CNI manager for ""
	I0919 18:40:28.635858   18347 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:40:28.635870   18347 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:40:28.635889   18347 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:40:28.636029   18347 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.154.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-9"
	  kubeletExtraArgs:
	    node-ip: 10.154.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 18:40:28.636087   18347 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:40:28.646072   18347 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0919 18:40:28.646149   18347 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0919 18:40:28.655119   18347 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0919 18:40:28.655174   18347 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0919 18:40:28.655186   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0919 18:40:28.655128   18347 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0919 18:40:28.655286   18347 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:40:28.655214   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0919 18:40:28.667697   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0919 18:40:28.712003   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3877946538 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 18:40:28.714825   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1283627928 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 18:40:28.735514   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1327978258 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 18:40:28.803029   18347 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:40:28.811719   18347 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0919 18:40:28.811742   18347 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0919 18:40:28.811781   18347 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0919 18:40:28.820615   18347 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0919 18:40:28.820758   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2338998767 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0919 18:40:28.829194   18347 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0919 18:40:28.829216   18347 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0919 18:40:28.829249   18347 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0919 18:40:28.837553   18347 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:40:28.837702   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4234291157 /lib/systemd/system/kubelet.service
	I0919 18:40:28.847315   18347 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 18:40:28.847445   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1349307929 /var/tmp/minikube/kubeadm.yaml.new
	I0919 18:40:28.856409   18347 exec_runner.go:51] Run: grep 10.154.0.4	control-plane.minikube.internal$ /etc/hosts
	I0919 18:40:28.857792   18347 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0919 18:40:29.053129   18347 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0919 18:40:29.068635   18347 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube for IP: 10.154.0.4
	I0919 18:40:29.068665   18347 certs.go:194] generating shared ca certs ...
	I0919 18:40:29.068687   18347 certs.go:226] acquiring lock for ca certs: {Name:mk2494a6fbff7af07ea8e3e511939f21236be85f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:29.068822   18347 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7688/.minikube/ca.key
	I0919 18:40:29.068875   18347 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7688/.minikube/proxy-client-ca.key
	I0919 18:40:29.068889   18347 certs.go:256] generating profile certs ...
	I0919 18:40:29.068950   18347 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/client.key
	I0919 18:40:29.068976   18347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/client.crt with IP's: []
	I0919 18:40:29.215549   18347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/client.crt ...
	I0919 18:40:29.215581   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/client.crt: {Name:mkd857966aac671af56f0d7cd10e91c2f3e6f9c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:29.215732   18347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/client.key ...
	I0919 18:40:29.215742   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/client.key: {Name:mk8edb0d58ccb07268e9e94a61662a6c5e4fd355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:29.215810   18347 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.key.1b9420d6
	I0919 18:40:29.215842   18347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
	I0919 18:40:29.319405   18347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
	I0919 18:40:29.319439   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk229b522014294f0ba23ff06bfcbd711e50c576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:29.319580   18347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
	I0919 18:40:29.319592   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mk05143c2a2f45c141b763c4909372a86c443b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:29.319645   18347 certs.go:381] copying /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.crt
	I0919 18:40:29.319714   18347 certs.go:385] copying /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.key
	I0919 18:40:29.319764   18347 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.key
	I0919 18:40:29.319777   18347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0919 18:40:29.741646   18347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.crt ...
	I0919 18:40:29.741685   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.crt: {Name:mk044624f84d23bf61f81793a413d42d386cb57f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:29.741828   18347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.key ...
	I0919 18:40:29.741839   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.key: {Name:mk79d4f2212b812f9e3110ee3e8eb1ac4061280b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:29.741983   18347 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7688/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:40:29.742017   18347 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7688/.minikube/certs/ca.pem (1078 bytes)
	I0919 18:40:29.742040   18347 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7688/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:40:29.742063   18347 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7688/.minikube/certs/key.pem (1675 bytes)
	I0919 18:40:29.742605   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:40:29.742730   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube541901573 /var/lib/minikube/certs/ca.crt
	I0919 18:40:29.753479   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 18:40:29.753615   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3267821240 /var/lib/minikube/certs/ca.key
	I0919 18:40:29.762266   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:40:29.762404   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube559044794 /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 18:40:29.770826   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 18:40:29.770946   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2179667425 /var/lib/minikube/certs/proxy-client-ca.key
	I0919 18:40:29.779232   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0919 18:40:29.779350   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2550896500 /var/lib/minikube/certs/apiserver.crt
	I0919 18:40:29.788613   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 18:40:29.788743   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1875049279 /var/lib/minikube/certs/apiserver.key
	I0919 18:40:29.797002   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:40:29.797174   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3885404875 /var/lib/minikube/certs/proxy-client.crt
	I0919 18:40:29.806519   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 18:40:29.806689   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1957564406 /var/lib/minikube/certs/proxy-client.key
	I0919 18:40:29.816142   18347 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0919 18:40:29.816166   18347 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.816202   18347 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.826190   18347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7688/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:40:29.826351   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3108450352 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.835731   18347 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:40:29.835890   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1962288703 /var/lib/minikube/kubeconfig
	I0919 18:40:29.845191   18347 exec_runner.go:51] Run: openssl version
	I0919 18:40:29.848119   18347 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:40:29.857417   18347 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.858742   18347 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.858788   18347 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.861876   18347 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 18:40:29.870824   18347 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:40:29.872014   18347 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:40:29.872051   18347 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:29.872153   18347 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 18:40:29.888043   18347 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:40:29.896750   18347 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:40:29.905401   18347 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0919 18:40:29.926288   18347 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:40:29.935153   18347 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:40:29.935172   18347 kubeadm.go:157] found existing configuration files:
	
	I0919 18:40:29.935214   18347 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:40:29.943623   18347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:40:29.943692   18347 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:40:29.951678   18347 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:40:29.960391   18347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:40:29.960447   18347 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:40:29.968799   18347 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.977089   18347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:40:29.977140   18347 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.985086   18347 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:40:29.993436   18347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:40:29.993499   18347 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:40:30.001606   18347 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 18:40:30.035665   18347 kubeadm.go:310] W0919 18:40:30.035532   19248 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:30.036280   18347 kubeadm.go:310] W0919 18:40:30.036196   19248 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:30.038061   18347 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:40:30.038088   18347 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:40:30.131111   18347 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:40:30.131235   18347 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:40:30.131248   18347 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:40:30.131255   18347 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:40:30.142281   18347 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:40:30.145152   18347 out.go:235]   - Generating certificates and keys ...
	I0919 18:40:30.145207   18347 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:40:30.145222   18347 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:40:30.206278   18347 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:40:30.275909   18347 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:40:30.457286   18347 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:40:30.527595   18347 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:40:30.713117   18347 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:40:30.713216   18347 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0919 18:40:30.865091   18347 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:40:30.865202   18347 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0919 18:40:31.047280   18347 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:40:31.137715   18347 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:40:31.218788   18347 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:40:31.218929   18347 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:40:31.435098   18347 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:40:31.511050   18347 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:40:31.748675   18347 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:40:31.798782   18347 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:40:31.951047   18347 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:40:31.951583   18347 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:40:31.953959   18347 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:40:31.956168   18347 out.go:235]   - Booting up control plane ...
	I0919 18:40:31.956197   18347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:40:31.956215   18347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:40:31.957305   18347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:40:31.989789   18347 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:40:31.994527   18347 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:40:31.994550   18347 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:40:32.235067   18347 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:40:32.235092   18347 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:40:33.236710   18347 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001642137s
	I0919 18:40:33.236732   18347 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:40:37.738967   18347 kubeadm.go:310] [api-check] The API server is healthy after 4.502211362s
	I0919 18:40:37.751286   18347 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:40:37.764446   18347 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:40:37.783848   18347 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:40:37.783872   18347 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:40:37.792122   18347 kubeadm.go:310] [bootstrap-token] Using token: bjuebf.1p3b4zf2wx78cvzu
	I0919 18:40:37.793711   18347 out.go:235]   - Configuring RBAC rules ...
	I0919 18:40:37.793748   18347 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:40:37.797312   18347 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:40:37.804876   18347 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:40:37.807571   18347 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:40:37.810239   18347 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:40:37.812777   18347 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:40:38.146234   18347 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:40:38.589999   18347 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:40:39.145995   18347 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:40:39.146866   18347 kubeadm.go:310] 
	I0919 18:40:39.146881   18347 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:40:39.146886   18347 kubeadm.go:310] 
	I0919 18:40:39.146891   18347 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:40:39.146896   18347 kubeadm.go:310] 
	I0919 18:40:39.146900   18347 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:40:39.146904   18347 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:40:39.146907   18347 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:40:39.146911   18347 kubeadm.go:310] 
	I0919 18:40:39.146915   18347 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:40:39.146919   18347 kubeadm.go:310] 
	I0919 18:40:39.146924   18347 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:40:39.146928   18347 kubeadm.go:310] 
	I0919 18:40:39.146932   18347 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:40:39.146936   18347 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:40:39.146940   18347 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:40:39.146944   18347 kubeadm.go:310] 
	I0919 18:40:39.146949   18347 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:40:39.146953   18347 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:40:39.146957   18347 kubeadm.go:310] 
	I0919 18:40:39.146961   18347 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bjuebf.1p3b4zf2wx78cvzu \
	I0919 18:40:39.146965   18347 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c2cbf547934162789bad485d8ca8decd065d73847fdf9aa972c0e34a21f6ef4 \
	I0919 18:40:39.146970   18347 kubeadm.go:310] 	--control-plane 
	I0919 18:40:39.146973   18347 kubeadm.go:310] 
	I0919 18:40:39.146978   18347 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:40:39.146981   18347 kubeadm.go:310] 
	I0919 18:40:39.146985   18347 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bjuebf.1p3b4zf2wx78cvzu \
	I0919 18:40:39.146990   18347 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c2cbf547934162789bad485d8ca8decd065d73847fdf9aa972c0e34a21f6ef4 
	I0919 18:40:39.149521   18347 cni.go:84] Creating CNI manager for ""
	I0919 18:40:39.149545   18347 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 18:40:39.152338   18347 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:40:39.153727   18347 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:40:39.164432   18347 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 18:40:39.164581   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3908928204 /etc/cni/net.d/1-k8s.conflist
	I0919 18:40:39.175689   18347 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:40:39.175748   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:39.175773   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_09_19T18_40_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0919 18:40:39.184763   18347 ops.go:34] apiserver oom_adj: -16
	I0919 18:40:39.242630   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:39.743717   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:40.243707   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:40.743082   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:41.243070   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:41.743421   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:42.242977   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:42.742780   18347 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:42.808727   18347 kubeadm.go:1113] duration metric: took 3.633028254s to wait for elevateKubeSystemPrivileges
	I0919 18:40:42.808757   18347 kubeadm.go:394] duration metric: took 12.936710692s to StartCluster
	I0919 18:40:42.808775   18347 settings.go:142] acquiring lock: {Name:mka5c3fd1843d5ede2983e2252b2082478095993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:42.808833   18347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-7688/kubeconfig
	I0919 18:40:42.809402   18347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7688/kubeconfig: {Name:mk79e2b2e007976df334d89913551612eb77de55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:42.809644   18347 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:40:42.809721   18347 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:40:42.809842   18347 addons.go:69] Setting yakd=true in profile "minikube"
	I0919 18:40:42.809862   18347 addons.go:234] Setting addon yakd=true in "minikube"
	I0919 18:40:42.809869   18347 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0919 18:40:42.809883   18347 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0919 18:40:42.809901   18347 addons.go:69] Setting registry=true in profile "minikube"
	I0919 18:40:42.809908   18347 addons.go:69] Setting volcano=true in profile "minikube"
	I0919 18:40:42.809901   18347 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0919 18:40:42.809914   18347 addons.go:234] Setting addon registry=true in "minikube"
	I0919 18:40:42.809920   18347 addons.go:234] Setting addon volcano=true in "minikube"
	I0919 18:40:42.809921   18347 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0919 18:40:42.809923   18347 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0919 18:40:42.809933   18347 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0919 18:40:42.809945   18347 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0919 18:40:42.809945   18347 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0919 18:40:42.809950   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.809899   18347 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0919 18:40:42.809955   18347 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0919 18:40:42.809959   18347 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0919 18:40:42.809966   18347 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0919 18:40:42.809973   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.809966   18347 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:40:42.809992   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.809945   18347 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0919 18:40:42.810025   18347 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0919 18:40:42.810057   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.809949   18347 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0919 18:40:42.810119   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.810597   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.810617   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.810629   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.810639   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.809926   18347 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0919 18:40:42.810685   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.810702   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.810723   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.810759   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.810684   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.809937   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.810708   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.810913   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.810949   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.809892   18347 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0919 18:40:42.809937   18347 mustload.go:65] Loading cluster: minikube
	I0919 18:40:42.811122   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.809939   18347 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0919 18:40:42.811390   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.811402   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.811431   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.809953   18347 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0919 18:40:42.811488   18347 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0919 18:40:42.811496   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.811510   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.811526   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.811543   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.811617   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.811629   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.811658   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.811742   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.811756   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.811788   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.811941   18347 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 18:40:42.812209   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.812233   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.812264   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.812317   18347 out.go:177] * Configuring local host environment ...
	I0919 18:40:42.812414   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.812433   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.812465   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.810652   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 18:40:42.814328   18347 out.go:270] * 
	W0919 18:40:42.814400   18347 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	I0919 18:40:42.809914   18347 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0919 18:40:42.810708   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.810710   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.814498   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.814537   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.809891   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.815355   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.815377   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.815406   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.815515   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.815575   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.815760   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 18:40:42.816082   18347 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0919 18:40:42.816099   18347 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0919 18:40:42.816107   18347 out.go:270] * 
	W0919 18:40:42.816155   18347 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0919 18:40:42.817738   18347 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0919 18:40:42.817755   18347 out.go:270] * 
	W0919 18:40:42.817820   18347 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0919 18:40:42.817868   18347 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0919 18:40:42.817889   18347 out.go:270] * 
	W0919 18:40:42.817896   18347 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0919 18:40:42.817938   18347 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 18:40:42.822625   18347 out.go:177] * Verifying Kubernetes components...
	I0919 18:40:42.824109   18347 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0919 18:40:42.831223   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.831496   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.833054   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.834235   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.838054   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.844650   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.844684   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.844727   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.849983   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.850223   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.850464   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.850494   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.850641   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.863100   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.863183   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.863100   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.863247   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.863373   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.863415   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.866580   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.866649   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.869460   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.869535   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.869917   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.869982   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.870761   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.873225   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.873278   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.874195   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.874248   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.885688   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.886107   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.886130   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.886808   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.886873   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.887096   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.887115   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.887532   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.889996   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.890885   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.890906   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.891771   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.891793   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.894917   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.894956   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.896852   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.897811   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.900742   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.901183   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.901242   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.901414   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.901455   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.901456   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.901504   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.901712   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.901732   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.901968   18347 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0919 18:40:42.902034   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.902266   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.903009   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.903077   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.903128   18347 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:40:42.903150   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.904232   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:40:42.905732   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.906058   18347 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:40:42.906247   18347 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:40:42.906358   18347 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:40:42.906638   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.907226   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3553141232 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:40:42.907600   18347 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0919 18:40:42.907627   18347 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:40:42.907635   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.907839   18347 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:40:42.907906   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:40:42.908346   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:42.908358   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:42.908389   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:42.908685   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3051562957 /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:40:42.909110   18347 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:42.909165   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:40:42.909362   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1131908623 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:42.915319   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.915430   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.915703   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.915774   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.915881   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.915900   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.916533   18347 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0919 18:40:42.918709   18347 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0919 18:40:42.918847   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.918873   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.921853   18347 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0919 18:40:42.924018   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.925167   18347 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:40:42.925212   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0919 18:40:42.925255   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.925947   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube333733063 /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:40:42.926332   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.926708   18347 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:40:42.927807   18347 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:40:42.927895   18347 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:40:42.928776   18347 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:40:42.928805   18347 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:40:42.928960   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1040404783 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:40:42.929908   18347 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:40:42.929943   18347 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:40:42.930103   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2793607498 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:40:42.930271   18347 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:40:42.930300   18347 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:40:42.930496   18347 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:42.930513   18347 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0919 18:40:42.930520   18347 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:42.930558   18347 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:42.930650   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3871875242 /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:40:42.930887   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.930909   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.931384   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.931401   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.937632   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.937751   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.938530   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.938593   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.938779   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.939395   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.940692   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:40:42.941623   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.941659   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:42.944363   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.944391   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.944479   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.944947   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:40:42.944962   18347 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:40:42.946147   18347 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:40:42.946327   18347 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:40:42.946360   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:40:42.946486   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3589643464 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:40:42.947178   18347 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:40:42.947200   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:40:42.947329   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1007511896 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:40:42.947632   18347 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:42.947665   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:40:42.947721   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:40:42.947920   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube729239756 /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:42.950532   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.950595   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:40:42.952362   18347 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:40:42.952385   18347 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:40:42.952474   18347 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:40:42.952479   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2060377037 /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:40:42.953901   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0919 18:40:42.954285   18347 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:40:42.954320   18347 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:40:42.954548   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube242408396 /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:40:42.954690   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:40:42.957502   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:40:42.958038   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:42.958062   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:42.959389   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:42.960200   18347 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:40:42.960227   18347 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:40:42.960351   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1770211865 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:40:42.961537   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:40:42.962020   18347 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:40:42.963180   18347 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:40:42.963334   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:42.963368   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:42.965630   18347 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:40:42.966294   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube10105041 /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:40:42.966614   18347 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:40:42.966669   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.966718   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.968243   18347 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:40:42.968277   18347 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:40:42.968401   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3818114451 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:40:42.971089   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:40:42.971245   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube377283491 /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:42.981112   18347 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:42.981154   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:40:42.981291   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1928163416 /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:42.981502   18347 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:40:42.981535   18347 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:40:42.981682   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1262892543 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:40:42.987926   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:42.988085   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:42.996138   18347 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:40:42.996177   18347 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:40:42.997418   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:42.997474   18347 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:40:42.997500   18347 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:40:42.997631   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube343690779 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:40:42.997941   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube39427377 /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:40:43.000542   18347 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:40:43.000570   18347 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:40:43.000694   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube7687927 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:40:43.004580   18347 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:40:43.004631   18347 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:40:43.004871   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1470706512 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:40:43.009892   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:43.009924   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:43.010746   18347 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:43.010771   18347 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:40:43.010917   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube402364682 /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:43.015275   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:43.015628   18347 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:40:43.015658   18347 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:40:43.016603   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2572137215 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:40:43.018354   18347 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:40:43.019648   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:43.021984   18347 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:40:43.022182   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:43.023083   18347 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:40:43.023104   18347 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:40:43.023194   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2479275991 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:40:43.023775   18347 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:43.023845   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:40:43.023994   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1138263087 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:43.028647   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:43.028678   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:43.036072   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:43.036122   18347 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:43.036139   18347 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0919 18:40:43.036147   18347 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:43.036218   18347 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:43.038049   18347 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:40:43.038083   18347 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:40:43.038200   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3734502322 /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:40:43.039671   18347 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:40:43.039703   18347 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:40:43.039926   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3166678648 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:40:43.042707   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:40:43.044563   18347 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:40:43.044601   18347 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:40:43.044718   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4207286091 /etc/kubernetes/addons/ig-role.yaml
	I0919 18:40:43.044801   18347 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:40:43.044829   18347 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:40:43.044955   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1247026014 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:40:43.048636   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:43.054436   18347 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:40:43.054625   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1377865606 /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:43.063227   18347 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:40:43.063263   18347 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:40:43.063405   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4048375533 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:40:43.075088   18347 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:43.075127   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:40:43.075274   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2504877161 /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:43.089639   18347 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:40:43.089710   18347 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:40:43.089920   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2166285816 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:40:43.096871   18347 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:43.096917   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:40:43.097066   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4235799609 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:43.098863   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:43.100886   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:43.104803   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:43.114325   18347 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:40:43.114368   18347 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:40:43.114508   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube978109645 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:40:43.116476   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:43.180031   18347 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:40:43.180073   18347 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:40:43.180230   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2225662024 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:40:43.196340   18347 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:40:43.196378   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:40:43.196527   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1241784179 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:40:43.213267   18347 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:40:43.213321   18347 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:40:43.213469   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2241004556 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:40:43.261821   18347 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:40:43.261860   18347 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:40:43.262053   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube250618913 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:40:43.277683   18347 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:40:43.277725   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:40:43.278678   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4064984184 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:40:43.284762   18347 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0919 18:40:43.307879   18347 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:40:43.307925   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:40:43.308110   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2270491158 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:40:43.312221   18347 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:40:43.312281   18347 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:40:43.312558   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3676109610 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:40:43.341023   18347 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:40:43.341058   18347 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:40:43.341197   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3592966045 /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:40:43.376867   18347 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
	I0919 18:40:43.380099   18347 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
	I0919 18:40:43.380129   18347 node_ready.go:38] duration metric: took 3.233024ms for node "ubuntu-20-agent-9" to be "Ready" ...
	I0919 18:40:43.380143   18347 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:40:43.395709   18347 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:43.437301   18347 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:43.437339   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:40:43.438186   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2784745492 /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:43.444213   18347 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:43.444246   18347 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:40:43.444387   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3925741430 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:43.447481   18347 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0919 18:40:43.460850   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:43.475426   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:43.628914   18347 addons.go:475] Verifying addon registry=true in "minikube"
	I0919 18:40:43.631607   18347 out.go:177] * Verifying registry addon...
	I0919 18:40:43.634462   18347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:40:43.638790   18347 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:43.961268   18347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0919 18:40:44.138475   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.033613188s)
	I0919 18:40:44.141040   18347 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0919 18:40:44.146124   18347 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:44.146159   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.172768   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.124082858s)
	I0919 18:40:44.172802   18347 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0919 18:40:44.248045   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.14913285s)
	I0919 18:40:44.489350   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.028443654s)
	I0919 18:40:44.642760   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.962187   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.84565981s)
	W0919 18:40:44.962253   18347 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:40:44.962284   18347 retry.go:31] will retry after 176.598585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
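The failure above is the classic CRD ordering race: `kubectl apply` creates the `VolumeSnapshotClass` CRD and an instance of it in the same invocation, and the instance fails because the CRD is not yet established. minikube's `retry.go` works around this by re-applying after a growing delay. A minimal sketch of that retry-with-backoff pattern, using a stub `apply_addons` function in place of the real kubectl command (the function name, attempt count, and delays here are illustrative, not minikube's actual values):

```shell
# Hedged sketch of the retry loop visible in the log (retry.go:
# "will retry after ...ms"). apply_addons is a stand-in stub that
# fails twice, simulating "no matches for kind VolumeSnapshotClass"
# until the CRD is established; it is NOT the real kubectl call.
attempts=0
apply_addons() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # succeed on the third try
}

delay=0.001   # stand-in for the initial retry delay
for i in 1 2 3 4 5; do
  if apply_addons; then
    echo "applied after $attempts attempts"
    break
  fi
  sleep "$delay"
  delay=$(awk "BEGIN { print $delay * 2 }")   # double the backoff
done
```

Note that the log also shows the eventual fallback to `kubectl apply --force`, which re-runs the same manifests once the CRDs from the first pass have registered.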
	I0919 18:40:45.139044   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:45.143177   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.413975   18347 pod_ready.go:103] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:45.638967   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.989588   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.035649157s)
	I0919 18:40:46.139939   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.338110   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.862576334s)
	I0919 18:40:46.338151   18347 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0919 18:40:46.339793   18347 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:40:46.342165   18347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:40:46.349241   18347 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:46.349272   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.405271   18347 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:46.405304   18347 pod_ready.go:82] duration metric: took 3.009333664s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:46.405316   18347 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:46.639610   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.847373   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.138694   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.347210   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.638171   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.846766   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.106295   18347 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.967193151s)
	I0919 18:40:48.138768   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.346439   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.411575   18347 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:48.638747   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.847200   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.138907   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.347321   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.638609   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.848441   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.971182   18347 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:40:49.971355   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1872321953 /var/lib/minikube/google_application_credentials.json
	I0919 18:40:49.983103   18347 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:40:49.983242   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3882272232 /var/lib/minikube/google_cloud_project
	I0919 18:40:49.995920   18347 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0919 18:40:49.995979   18347 host.go:66] Checking if "minikube" exists ...
	I0919 18:40:49.996483   18347 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0919 18:40:49.996503   18347 api_server.go:166] Checking apiserver status ...
	I0919 18:40:49.996535   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:50.018337   18347 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19666/cgroup
	I0919 18:40:50.031254   18347 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7"
	I0919 18:40:50.031330   18347 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/6c315280a56d46b360c59898d30df121ea69207e0d0ccc102e8357a0701ddcc7/freezer.state
	I0919 18:40:50.044000   18347 api_server.go:204] freezer state: "THAWED"
	I0919 18:40:50.044036   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:50.049669   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:50.049740   18347 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:40:50.054231   18347 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:50.056055   18347 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:40:50.058052   18347 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:40:50.058102   18347 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:40:50.058275   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1782457748 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:40:50.070532   18347 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:40:50.070572   18347 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:40:50.070704   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube794085076 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:40:50.083501   18347 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:50.083531   18347 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:40:50.083670   18347 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3000137192 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:50.097098   18347 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:50.138959   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.347282   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.411914   18347 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:50.499790   18347 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0919 18:40:50.501561   18347 out.go:177] * Verifying gcp-auth addon...
	I0919 18:40:50.504268   18347 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:40:50.506846   18347 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:40:50.638603   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.847283   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.138267   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.347541   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.412214   18347 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:51.412244   18347 pod_ready.go:82] duration metric: took 5.006918559s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:51.412259   18347 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:51.417099   18347 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:51.417121   18347 pod_ready.go:82] duration metric: took 4.853783ms for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:51.417133   18347 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:51.421191   18347 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:51.421211   18347 pod_ready.go:82] duration metric: took 4.068912ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:51.421221   18347 pod_ready.go:39] duration metric: took 8.041064997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:40:51.421241   18347 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:40:51.421298   18347 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:40:51.443019   18347 api_server.go:72] duration metric: took 8.625043733s to wait for apiserver process to appear ...
	I0919 18:40:51.443048   18347 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:40:51.443071   18347 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0919 18:40:51.447326   18347 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0919 18:40:51.448319   18347 api_server.go:141] control plane version: v1.31.1
	I0919 18:40:51.448343   18347 api_server.go:131] duration metric: took 5.288171ms to wait for apiserver health ...
	I0919 18:40:51.448353   18347 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:40:51.457088   18347 system_pods.go:59] 17 kube-system pods found
	I0919 18:40:51.457120   18347 system_pods.go:61] "coredns-7c65d6cfc9-bjhkw" [16b034a1-88cf-404f-bcf3-30549118d4ec] Running
	I0919 18:40:51.457131   18347 system_pods.go:61] "csi-hostpath-attacher-0" [76c7fd8a-aa16-4ddb-abd5-7a8f8f166d2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:40:51.457141   18347 system_pods.go:61] "csi-hostpath-resizer-0" [6de77255-2fb2-462a-9abd-72bade5caf5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:40:51.457155   18347 system_pods.go:61] "csi-hostpathplugin-p9scz" [e7a785fc-3d97-486a-9c29-2bcf2a78f3a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:40:51.457167   18347 system_pods.go:61] "etcd-ubuntu-20-agent-9" [9a65c6da-374f-47c2-aa4f-8eb8d844e3e4] Running
	I0919 18:40:51.457177   18347 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [bdafdc8e-8407-4290-b66a-68756d201c6c] Running
	I0919 18:40:51.457213   18347 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [564090ca-7caf-40ee-8928-de38bf740aff] Running
	I0919 18:40:51.457223   18347 system_pods.go:61] "kube-proxy-5npzc" [09c79aa2-6c89-491b-909a-fc084a7cadd9] Running
	I0919 18:40:51.457229   18347 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [f8441e6f-ffc0-4ec4-8984-394248f45713] Running
	I0919 18:40:51.457239   18347 system_pods.go:61] "metrics-server-84c5f94fbc-69lbb" [fe8f0275-1e1a-4ccf-a704-5147f0ae5bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:40:51.457251   18347 system_pods.go:61] "nvidia-device-plugin-daemonset-lqwtk" [33f623f6-129c-4e20-bd9d-5eaefadc7a64] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 18:40:51.457265   18347 system_pods.go:61] "registry-66c9cd494c-bhm68" [d43e593d-3a8a-4415-b4cc-21dfcb93cfab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 18:40:51.457276   18347 system_pods.go:61] "registry-proxy-6wfzk" [d311bda0-57b4-46a9-a6b7-207e358dc87d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 18:40:51.457289   18347 system_pods.go:61] "snapshot-controller-56fcc65765-2l8p5" [51294786-f6c0-486a-9d80-6df2f92cab5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:51.457300   18347 system_pods.go:61] "snapshot-controller-56fcc65765-n4htf" [f29bab47-8f48-43fe-9d02-32a6c2d754f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:51.457310   18347 system_pods.go:61] "storage-provisioner" [a5b76d81-02e8-4fd7-b11a-497eac82884d] Running
	I0919 18:40:51.457318   18347 system_pods.go:61] "tiller-deploy-b48cc5f79-m992c" [fe8a69b7-2aed-4388-bbfa-e21130b46e0e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0919 18:40:51.457328   18347 system_pods.go:74] duration metric: took 8.969358ms to wait for pod list to return data ...
	I0919 18:40:51.457338   18347 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:40:51.459948   18347 default_sa.go:45] found service account: "default"
	I0919 18:40:51.459973   18347 default_sa.go:55] duration metric: took 2.625548ms for default service account to be created ...
	I0919 18:40:51.459984   18347 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:40:51.468430   18347 system_pods.go:86] 17 kube-system pods found
	I0919 18:40:51.468465   18347 system_pods.go:89] "coredns-7c65d6cfc9-bjhkw" [16b034a1-88cf-404f-bcf3-30549118d4ec] Running
	I0919 18:40:51.468479   18347 system_pods.go:89] "csi-hostpath-attacher-0" [76c7fd8a-aa16-4ddb-abd5-7a8f8f166d2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 18:40:51.468488   18347 system_pods.go:89] "csi-hostpath-resizer-0" [6de77255-2fb2-462a-9abd-72bade5caf5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 18:40:51.468499   18347 system_pods.go:89] "csi-hostpathplugin-p9scz" [e7a785fc-3d97-486a-9c29-2bcf2a78f3a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 18:40:51.468505   18347 system_pods.go:89] "etcd-ubuntu-20-agent-9" [9a65c6da-374f-47c2-aa4f-8eb8d844e3e4] Running
	I0919 18:40:51.468515   18347 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [bdafdc8e-8407-4290-b66a-68756d201c6c] Running
	I0919 18:40:51.468523   18347 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [564090ca-7caf-40ee-8928-de38bf740aff] Running
	I0919 18:40:51.468531   18347 system_pods.go:89] "kube-proxy-5npzc" [09c79aa2-6c89-491b-909a-fc084a7cadd9] Running
	I0919 18:40:51.468537   18347 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [f8441e6f-ffc0-4ec4-8984-394248f45713] Running
	I0919 18:40:51.468549   18347 system_pods.go:89] "metrics-server-84c5f94fbc-69lbb" [fe8f0275-1e1a-4ccf-a704-5147f0ae5bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:40:51.468560   18347 system_pods.go:89] "nvidia-device-plugin-daemonset-lqwtk" [33f623f6-129c-4e20-bd9d-5eaefadc7a64] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 18:40:51.468571   18347 system_pods.go:89] "registry-66c9cd494c-bhm68" [d43e593d-3a8a-4415-b4cc-21dfcb93cfab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 18:40:51.468581   18347 system_pods.go:89] "registry-proxy-6wfzk" [d311bda0-57b4-46a9-a6b7-207e358dc87d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 18:40:51.468593   18347 system_pods.go:89] "snapshot-controller-56fcc65765-2l8p5" [51294786-f6c0-486a-9d80-6df2f92cab5b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:51.468601   18347 system_pods.go:89] "snapshot-controller-56fcc65765-n4htf" [f29bab47-8f48-43fe-9d02-32a6c2d754f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 18:40:51.468609   18347 system_pods.go:89] "storage-provisioner" [a5b76d81-02e8-4fd7-b11a-497eac82884d] Running
	I0919 18:40:51.468622   18347 system_pods.go:89] "tiller-deploy-b48cc5f79-m992c" [fe8a69b7-2aed-4388-bbfa-e21130b46e0e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0919 18:40:51.468633   18347 system_pods.go:126] duration metric: took 8.641944ms to wait for k8s-apps to be running ...
	I0919 18:40:51.468652   18347 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:40:51.468709   18347 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:40:51.484214   18347 system_svc.go:56] duration metric: took 15.549189ms WaitForService to wait for kubelet
	I0919 18:40:51.484251   18347 kubeadm.go:582] duration metric: took 8.66628113s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:51.484275   18347 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:40:51.487264   18347 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 18:40:51.487294   18347 node_conditions.go:123] node cpu capacity is 8
	I0919 18:40:51.487308   18347 node_conditions.go:105] duration metric: took 3.0276ms to run NodePressure ...
	I0919 18:40:51.487323   18347 start.go:241] waiting for startup goroutines ...
	I0919 18:40:51.638892   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.847424   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.138266   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.347465   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.639092   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.846517   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.173286   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.348058   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.638116   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.847119   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.137980   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.347377   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.638684   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.846143   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.138994   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.346706   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.637807   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.846324   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.137721   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.346137   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.638147   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.846688   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.137499   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.347514   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.638517   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.845897   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.137911   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.347088   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.638391   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.847836   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.139082   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.346608   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.638126   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.848499   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.138658   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.346892   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.638033   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.846873   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.138555   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.346450   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.638654   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.846255   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.138566   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.345973   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.638116   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.846337   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.138464   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.346786   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.637418   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.846002   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.138055   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.346530   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.638296   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.846714   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.138306   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.347045   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.638463   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.845929   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.137971   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.346353   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.638977   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.846422   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.138056   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.346634   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.638539   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.845901   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.137751   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.346277   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.638509   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.847516   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.138805   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.346744   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.638182   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.847662   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.137600   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.346209   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.638141   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.847158   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.138328   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.346994   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.638401   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.846695   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.139114   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.346533   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.638284   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.846830   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.138850   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.345926   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.638774   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.846257   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.137728   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.346087   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.638532   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.847098   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.138300   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.346479   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.638612   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.846544   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.138196   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.347223   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.638480   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.846806   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.138417   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.347188   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.637901   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.846471   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.137789   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.346371   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.638047   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.847229   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.138663   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.346757   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.638912   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.846511   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.138381   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.346836   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.638779   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.846618   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.139945   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.346399   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.638605   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.847504   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.137893   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.346162   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.638042   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.846485   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.138039   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.346666   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.638432   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.847002   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.137419   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.347354   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.638618   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.845891   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.138349   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.346936   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.638987   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.845927   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.137932   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.346346   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.638568   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.848006   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.138477   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.345840   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.638849   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.846180   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.138190   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.346672   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.638957   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.846682   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.138281   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.346214   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.638410   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.846321   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.138152   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.346772   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.638757   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.846037   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.137718   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.346230   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.638583   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.846962   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.138379   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.347221   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.637907   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.846657   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.138292   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.346827   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.637961   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.846764   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.138390   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.346869   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.638103   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.846465   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.138351   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.346853   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.638089   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.846872   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.138011   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.347074   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.639024   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.847159   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.137698   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.346109   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.637975   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.846437   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.138786   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.346350   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.638333   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.846853   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.137993   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.346769   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.638602   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.847322   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.138317   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.347387   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.637825   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.846259   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.137792   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.346689   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.638114   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.847192   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.137868   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.347581   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.638407   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.846158   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.137493   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.345578   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.638114   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.846974   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.138417   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.346756   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.638876   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.846553   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.138382   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.347136   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.638618   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.846591   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.138488   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.346919   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.638904   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.845685   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.137847   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.346154   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.638254   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.847130   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.137998   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.346549   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.638398   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.846816   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.138719   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.346235   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.637625   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.846861   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.137837   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.345888   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.638106   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.846796   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.138605   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.345971   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.637961   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.846690   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.137549   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.346142   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.637922   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.846109   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.138286   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.346666   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.637981   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.846537   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.137527   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.345931   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.637909   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.846438   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.138090   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.346495   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.638263   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.846287   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.138340   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.345989   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.638216   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.846790   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.138449   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.347345   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.638566   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.847232   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.137791   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.346330   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.638967   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.846193   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.137563   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.346121   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.637864   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.846582   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.138360   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.347217   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.638756   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.846794   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.139152   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.346978   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.638298   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.847485   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.138509   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:02.346194   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.638140   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:02.846558   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.139208   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:03.347098   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.638553   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:03.848726   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.137956   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:04.346512   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.639064   18347 kapi.go:107] duration metric: took 1m21.0045993s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:42:04.846705   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.346975   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.847541   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.346783   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.847086   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.346321   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.847228   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.346968   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.846756   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.346931   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.847510   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.346726   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.846719   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.346204   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.847252   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.346288   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.846560   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.347466   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.846835   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.346982   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.847603   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.347067   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.846494   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.346794   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.847419   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.347387   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.846736   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.347764   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.847981   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.347080   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.846856   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.346736   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.846386   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.346683   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.847198   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.347059   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.846753   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:23.346892   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:23.846884   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:24.346686   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:24.846334   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:25.346761   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:25.846695   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:26.347549   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:26.847140   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:27.346535   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:27.846945   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:28.346865   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:28.846713   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:29.346946   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:29.847848   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:30.347657   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:30.847352   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:31.347203   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:31.846680   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:32.346407   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:32.846950   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:33.347553   18347 kapi.go:107] duration metric: took 1m47.005387392s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:43:34.508010   18347 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:43:34.508036   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:35.008357   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:35.507668   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:36.008136   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:36.508435   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:37.007424   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:37.507500   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:38.007962   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:38.508469   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:39.007481   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:39.508098   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:40.008649   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:40.508067   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:41.007147   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:41.507756   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:42.007762   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:42.508004   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:43.007617   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:43.507393   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:44.007282   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:44.507862   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:45.007768   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:45.508123   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:46.007509   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:46.507717   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:47.007717   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:47.507755   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:48.008208   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:48.507652   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:49.007574   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:49.507990   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:50.007597   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:50.507870   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:51.007969   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:51.507764   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:52.007760   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:52.507979   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:53.008007   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:53.508137   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:54.008220   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:54.507995   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:55.008231   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:55.507229   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:56.007893   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:56.508045   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:57.007523   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:57.507548   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:58.007168   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:58.507944   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:59.008105   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:59.508644   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:00.007918   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:00.507745   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:01.008424   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:01.507461   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:02.007460   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:02.507862   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:03.008420   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:03.507875   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:04.008100   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:04.508685   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:05.006875   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:05.507387   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:06.007571   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:06.508033   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:07.008006   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:07.507752   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:08.008406   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:08.508109   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:09.008081   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:09.508447   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:10.007784   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:10.508146   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:11.008991   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:11.508018   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:12.008336   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:12.508179   18347 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:44:13.008498   18347 kapi.go:107] duration metric: took 3m22.504229366s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:44:13.010739   18347 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0919 18:44:13.012496   18347 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:44:13.014251   18347 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:44:13.016130   18347 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, helm-tiller, storage-provisioner, yakd, metrics-server, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0919 18:44:13.018020   18347 addons.go:510] duration metric: took 3m30.208285554s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner helm-tiller storage-provisioner yakd metrics-server storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0919 18:44:13.018070   18347 start.go:246] waiting for cluster config update ...
	I0919 18:44:13.018096   18347 start.go:255] writing updated cluster config ...
	I0919 18:44:13.018372   18347 exec_runner.go:51] Run: rm -f paused
	I0919 18:44:13.064180   18347 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:44:13.066425   18347 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-09-13 07:37:52 UTC, end at Thu 2024-09-19 18:54:05 UTC. --
	Sep 19 18:48:27 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:48:27.293188845Z" level=info msg="ignoring event" container=d2f1969fb7f3194845c46b1806339206e991b28010c3a333ec7184ebaa139959 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:50:41 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:50:41.794145364Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 19 18:50:41 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:50:41.797046041Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 19 18:53:04 ubuntu-20-agent-9 cri-dockerd[18908]: time="2024-09-19T18:53:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7a06642af09bbc9bbb3fc640b9ebefae04d71027422ea3eb5a753b2cef020bb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 18:53:04 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:04.987086420Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:53:04 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:04.989315189Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:53:15 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:15.768730026Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:53:15 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:15.771026686Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:53:28 ubuntu-20-agent-9 cri-dockerd[18908]: time="2024-09-19T18:53:28Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 19 18:53:30 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:30.137892996Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 19 18:53:30 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:30.137912166Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 19 18:53:30 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:30.140271392Z" level=error msg="Error running exec 7363dcca917ba665e27b4cdcb5551bfbc6947ff836fb10db5dff52cbfdf2c8ab in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 19 18:53:30 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:30.151658358Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 19 18:53:30 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:30.151658671Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 19 18:53:30 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:30.153312934Z" level=error msg="Error running exec 693eed1d3655f5d471a5efc45e37397e3c3d4df4a6391d4fc93597e430ae0f3c in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 19 18:53:30 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:30.372289216Z" level=info msg="ignoring event" container=0b85a20c807c241737543f2d215e651b8331e46e0b6a2b303f9e66a331c4127c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:53:30 ubuntu-20-agent-9 cri-dockerd[18908]: time="2024-09-19T18:53:30Z" level=error msg="error getting RW layer size for container ID 'd2f1969fb7f3194845c46b1806339206e991b28010c3a333ec7184ebaa139959': Error response from daemon: No such container: d2f1969fb7f3194845c46b1806339206e991b28010c3a333ec7184ebaa139959"
	Sep 19 18:53:30 ubuntu-20-agent-9 cri-dockerd[18908]: time="2024-09-19T18:53:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd2f1969fb7f3194845c46b1806339206e991b28010c3a333ec7184ebaa139959'"
	Sep 19 18:53:41 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:41.765385480Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:53:41 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:53:41.767730357Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 19 18:54:04 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:54:04.427303539Z" level=info msg="ignoring event" container=d7a06642af09bbc9bbb3fc640b9ebefae04d71027422ea3eb5a753b2cef020bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:54:04 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:54:04.741550529Z" level=info msg="ignoring event" container=b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:54:04 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:54:04.817097635Z" level=info msg="ignoring event" container=c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:54:04 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:54:04.896503557Z" level=info msg="ignoring event" container=43f40aa3b79e7024d0113df199e66362fa4523ec9b089a33f3a018be29cb5761 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 18:54:05 ubuntu-20-agent-9 dockerd[18564]: time="2024-09-19T18:54:05.014419990Z" level=info msg="ignoring event" container=4dc21e26fb9ae30adc7e362b1bea65cadad915d9a06e6175b03671737908fe00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0b85a20c807c2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            37 seconds ago      Exited              gadget                                   7                   c4cdd5536672f       gadget-tpnqv
	83bec775c8678       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   9c7cc15f80cd2       gcp-auth-89d5ffd79-4mqts
	03d63b34faf4d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   8e77868a00e71       csi-hostpathplugin-p9scz
	d4f8f6057f959       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   8e77868a00e71       csi-hostpathplugin-p9scz
	eb6e616a64d18       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   8e77868a00e71       csi-hostpathplugin-p9scz
	f3afbf0ccbab6       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   8e77868a00e71       csi-hostpathplugin-p9scz
	d9fcc95709693       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   8e77868a00e71       csi-hostpathplugin-p9scz
	f337ba19847a6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   789b7e54bae56       csi-hostpath-resizer-0
	9b4d359428b3c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   8e77868a00e71       csi-hostpathplugin-p9scz
	17b09b388e268       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   9be7f801f1b0e       csi-hostpath-attacher-0
	32b0b731b1ab9       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   9bead5fbbab73       snapshot-controller-56fcc65765-2l8p5
	afb9e992b519c       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   202779f25edc0       snapshot-controller-56fcc65765-n4htf
	d188f98a7d4ff       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   54435c907db1c       local-path-provisioner-86d989889c-b4flw
	553266ef98b5b       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   a5ef2dca4fb5c       metrics-server-84c5f94fbc-69lbb
	097ade56cf036       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   f46ddee9bd031       cloud-spanner-emulator-769b77f747-kvlt6
	3c7c28c36581f       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        11 minutes ago      Running             yakd                                     0                   f9583a6fd4331       yakd-dashboard-67d98fc6b-nn6kp
	b225e5832fc23       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             12 minutes ago      Exited              registry                                 0                   43f40aa3b79e7       registry-66c9cd494c-bhm68
	886ec20abe7c7       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  12 minutes ago      Running             tiller                                   0                   a2c41b1d27cb3       tiller-deploy-b48cc5f79-m992c
	c4505d044bf6a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              13 minutes ago      Exited              registry-proxy                           0                   4dc21e26fb9ae       registry-proxy-6wfzk
	5b90f47e020e6       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     13 minutes ago      Running             nvidia-device-plugin-ctr                 0                   92425cfc99259       nvidia-device-plugin-daemonset-lqwtk
	d8110fb944ef4       c69fa2e9cbf5f                                                                                                                                13 minutes ago      Running             coredns                                  0                   a0bc312e8f2fa       coredns-7c65d6cfc9-bjhkw
	9e8d0be7e1c38       6e38f40d628db                                                                                                                                13 minutes ago      Running             storage-provisioner                      0                   bf104501d6fb0       storage-provisioner
	bf9f7cc98c547       60c005f310ff3                                                                                                                                13 minutes ago      Running             kube-proxy                               0                   d72b727eb3d90       kube-proxy-5npzc
	16a28047828b5       2e96e5913fc06                                                                                                                                13 minutes ago      Running             etcd                                     0                   bcf8ecdc25bcd       etcd-ubuntu-20-agent-9
	6c315280a56d4       6bab7719df100                                                                                                                                13 minutes ago      Running             kube-apiserver                           0                   fa02e6e6bc3d8       kube-apiserver-ubuntu-20-agent-9
	bbb534914b0cd       175ffd71cce3d                                                                                                                                13 minutes ago      Running             kube-controller-manager                  0                   c27de5d77501b       kube-controller-manager-ubuntu-20-agent-9
	8b644687b5fed       9aa1fad941575                                                                                                                                13 minutes ago      Running             kube-scheduler                           0                   7df8810152c8e       kube-scheduler-ubuntu-20-agent-9
	
	
	==> coredns [d8110fb944ef] <==
	[INFO] 10.244.0.3:48919 - 51049 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000150572s
	[INFO] 10.244.0.3:47730 - 27940 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056544s
	[INFO] 10.244.0.3:47730 - 22822 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100097s
	[INFO] 10.244.0.3:60796 - 8700 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000077137s
	[INFO] 10.244.0.3:60796 - 34545 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000125409s
	[INFO] 10.244.0.3:42941 - 34948 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000064075s
	[INFO] 10.244.0.3:42941 - 61851 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000096576s
	[INFO] 10.244.0.3:58888 - 8091 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000055728s
	[INFO] 10.244.0.3:58888 - 19353 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00009498s
	[INFO] 10.244.0.3:52777 - 8132 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066605s
	[INFO] 10.244.0.3:52777 - 55239 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093161s
	[INFO] 10.244.0.23:41477 - 54215 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000326137s
	[INFO] 10.244.0.23:57146 - 42171 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0004285s
	[INFO] 10.244.0.23:37375 - 222 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162184s
	[INFO] 10.244.0.23:34615 - 21489 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011995s
	[INFO] 10.244.0.23:46086 - 50479 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011565s
	[INFO] 10.244.0.23:43838 - 60501 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086958s
	[INFO] 10.244.0.23:46510 - 60793 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.002724672s
	[INFO] 10.244.0.23:35961 - 54774 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004303134s
	[INFO] 10.244.0.23:45541 - 61750 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00224688s
	[INFO] 10.244.0.23:49056 - 46209 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003346387s
	[INFO] 10.244.0.23:46335 - 28658 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00234332s
	[INFO] 10.244.0.23:50532 - 46075 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004002112s
	[INFO] 10.244.0.23:39868 - 32497 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001501093s
	[INFO] 10.244.0.23:54704 - 65432 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001575905s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-9
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-9
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_40_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-9
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:40:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-9
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:54:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:49:48 +0000   Thu, 19 Sep 2024 18:40:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:49:48 +0000   Thu, 19 Sep 2024 18:40:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:49:48 +0000   Thu, 19 Sep 2024 18:40:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:49:48 +0000   Thu, 19 Sep 2024 18:40:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.154.0.4
	  Hostname:    ubuntu-20-agent-9
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                4894487b-7b30-e033-3a9d-c6f45b6c4cf8
	  Boot ID:                    b78bccb3-97fa-42b0-9c3e-58e81dfe8270
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-769b77f747-kvlt6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gadget                      gadget-tpnqv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gcp-auth                    gcp-auth-89d5ffd79-4mqts                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-bjhkw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-p9scz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ubuntu-20-agent-9                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-ubuntu-20-agent-9             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-9    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-5npzc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ubuntu-20-agent-9             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-69lbb              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         13m
	  kube-system                 nvidia-device-plugin-daemonset-lqwtk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-2l8p5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-n4htf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 tiller-deploy-b48cc5f79-m992c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-b4flw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-nn6kp               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 13m   kube-proxy       
	  Normal   Starting                 13m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m   node-controller  Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 ba be 9f a2 08 08 06
	[  +0.031144] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 c0 47 6a 51 90 08 06
	[  +2.934112] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 08 a9 4d c3 34 08 06
	[  +1.873702] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 82 b2 d5 7c 91 08 06
	[  +2.179045] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 76 32 b3 cb 00 2e 08 06
	[  +6.047092] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e f5 7a 40 c7 59 08 06
	[  +0.514470] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e ea 0b cb aa 59 08 06
	[  +0.368610] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0e e3 4b 0c 57 3d 08 06
	[Sep19 18:43] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 25 d7 ba a0 4b 08 06
	[ +54.312802] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a 5d 40 03 17 70 08 06
	[  +0.026033] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 3c f5 39 65 fb 08 06
	[Sep19 18:44] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e da a4 57 03 32 08 06
	[  +0.000497] IPv4: martian source 10.244.0.23 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 6c ba 48 66 2f 08 06
	
	
	==> etcd [16a28047828b] <==
	{"level":"info","ts":"2024-09-19T18:40:35.082847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-19T18:40:35.082878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgPreVoteResp from 82d4d36e40f9b4a at term 1"}
	{"level":"info","ts":"2024-09-19T18:40:35.082893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
	{"level":"info","ts":"2024-09-19T18:40:35.082902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-19T18:40:35.082914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
	{"level":"info","ts":"2024-09-19T18:40:35.082927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-19T18:40:35.083963Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:35.084595Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:40:35.084595Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T18:40:35.084619Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:40:35.084876Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T18:40:35.084907Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T18:40:35.084939Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:35.085014Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:35.085041Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:35.085827Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:35.085946Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:35.086725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:40:35.086988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
	{"level":"info","ts":"2024-09-19T18:41:09.528198Z","caller":"traceutil/trace.go:171","msg":"trace[1939599954] transaction","detail":"{read_only:false; response_revision:913; number_of_response:1; }","duration":"113.904912ms","start":"2024-09-19T18:41:09.414260Z","end":"2024-09-19T18:41:09.528165Z","steps":["trace[1939599954] 'process raft request'  (duration: 44.05081ms)","trace[1939599954] 'compare'  (duration: 69.539542ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T18:42:18.131297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.054354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:42:18.131404Z","caller":"traceutil/trace.go:171","msg":"trace[990880061] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1056; }","duration":"125.190905ms","start":"2024-09-19T18:42:18.006198Z","end":"2024-09-19T18:42:18.131388Z","steps":["trace[990880061] 'range keys from in-memory index tree'  (duration: 124.983249ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:50:35.104151Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1675}
	{"level":"info","ts":"2024-09-19T18:50:35.127653Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1675,"took":"22.979252ms","hash":2918599492,"current-db-size-bytes":8228864,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4460544,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-09-19T18:50:35.127711Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2918599492,"revision":1675,"compact-revision":-1}
	
	
	==> gcp-auth [83bec775c867] <==
	2024/09/19 18:44:12 GCP Auth Webhook started!
	2024/09/19 18:44:28 Ready to marshal response ...
	2024/09/19 18:44:28 Ready to write response ...
	2024/09/19 18:44:29 Ready to marshal response ...
	2024/09/19 18:44:29 Ready to write response ...
	2024/09/19 18:44:51 Ready to marshal response ...
	2024/09/19 18:44:51 Ready to write response ...
	2024/09/19 18:44:51 Ready to marshal response ...
	2024/09/19 18:44:51 Ready to write response ...
	2024/09/19 18:44:51 Ready to marshal response ...
	2024/09/19 18:44:51 Ready to write response ...
	2024/09/19 18:53:04 Ready to marshal response ...
	2024/09/19 18:53:04 Ready to write response ...
	
	
	==> kernel <==
	 18:54:05 up 36 min,  0 users,  load average: 0.49, 0.38, 0.32
	Linux ubuntu-20-agent-9 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [6c315280a56d] <==
	W0919 18:43:06.760667       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.89.233:443: connect: connection refused
	W0919 18:43:34.405843       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.197.203:443: connect: connection refused
	E0919 18:43:34.405887       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.197.203:443: connect: connection refused" logger="UnhandledError"
	W0919 18:43:53.537375       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.197.203:443: connect: connection refused
	E0919 18:43:53.537413       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.197.203:443: connect: connection refused" logger="UnhandledError"
	W0919 18:43:53.554787       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.197.203:443: connect: connection refused
	E0919 18:43:53.554826       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.197.203:443: connect: connection refused" logger="UnhandledError"
	I0919 18:44:28.349882       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0919 18:44:28.368195       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0919 18:44:41.760213       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0919 18:44:41.767764       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0919 18:44:41.882075       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0919 18:44:41.914707       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0919 18:44:41.914745       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0919 18:44:41.915718       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0919 18:44:42.084460       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0919 18:44:42.102035       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0919 18:44:42.129692       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0919 18:44:42.783939       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0919 18:44:42.930218       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0919 18:44:43.068486       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0919 18:44:43.068626       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0919 18:44:43.096998       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0919 18:44:43.130280       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0919 18:44:43.305065       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [bbb534914b0c] <==
	W0919 18:52:55.958253       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:55.958298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:01.043850       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:01.043896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:06.369320       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:06.369376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:08.883446       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:08.883491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:16.969296       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:16.969344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:23.898057       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:23.898111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:37.042367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:37.042410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:41.434842       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:41.434883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:45.195400       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:45.195448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:49.707845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:49.707902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:52.739000       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:52.739039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:03.410434       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:03.410476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:54:04.701777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.608µs"
	
	
	==> kube-proxy [bf9f7cc98c54] <==
	I0919 18:40:45.010428       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:40:45.152684       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
	E0919 18:40:45.152759       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:40:45.207506       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:40:45.207576       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:40:45.210464       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:40:45.211002       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:40:45.211034       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:40:45.212090       1 config.go:328] "Starting node config controller"
	I0919 18:40:45.212120       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:40:45.219423       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:40:45.219451       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:40:45.219450       1 config.go:199] "Starting service config controller"
	I0919 18:40:45.219475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:40:45.313808       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:40:45.321930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:40:45.322152       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8b644687b5fe] <==
	W0919 18:40:35.960874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:35.960894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:36.914392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:40:36.914429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:36.938840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:40:36.938885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:36.943237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:40:36.943272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.019629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:37.019672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.056203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:40:37.056248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.064517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:37.064555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.085143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:40:37.085182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.087099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:40:37.087133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.172902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:40:37.172948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.212329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:40:37.212368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:37.334802       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:40:37.334858       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0919 18:40:39.057033       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-09-13 07:37:52 UTC, end at Thu 2024-09-19 18:54:05 UTC. --
	Sep 19 18:53:54 ubuntu-20-agent-9 kubelet[19808]: E0919 18:53:54.632753   19808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="8045fdc2-10d2-4c87-950b-836e19b0e656"
	Sep 19 18:53:56 ubuntu-20-agent-9 kubelet[19808]: E0919 18:53:56.635103   19808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="5a64c0f7-aa75-47db-99c0-c7a32214d3ca"
	Sep 19 18:53:58 ubuntu-20-agent-9 kubelet[19808]: I0919 18:53:58.631386   19808 scope.go:117] "RemoveContainer" containerID="0b85a20c807c241737543f2d215e651b8331e46e0b6a2b303f9e66a331c4127c"
	Sep 19 18:53:58 ubuntu-20-agent-9 kubelet[19808]: E0919 18:53:58.631627   19808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-tpnqv_gadget(ec3926e4-f2b3-4ef2-bdc1-2f1968f16525)\"" pod="gadget/gadget-tpnqv" podUID="ec3926e4-f2b3-4ef2-bdc1-2f1968f16525"
	Sep 19 18:54:04 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:04.529523   19808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a64c0f7-aa75-47db-99c0-c7a32214d3ca-gcp-creds\") pod \"5a64c0f7-aa75-47db-99c0-c7a32214d3ca\" (UID: \"5a64c0f7-aa75-47db-99c0-c7a32214d3ca\") "
	Sep 19 18:54:04 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:04.529599   19808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c9lk\" (UniqueName: \"kubernetes.io/projected/5a64c0f7-aa75-47db-99c0-c7a32214d3ca-kube-api-access-4c9lk\") pod \"5a64c0f7-aa75-47db-99c0-c7a32214d3ca\" (UID: \"5a64c0f7-aa75-47db-99c0-c7a32214d3ca\") "
	Sep 19 18:54:04 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:04.529684   19808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a64c0f7-aa75-47db-99c0-c7a32214d3ca-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5a64c0f7-aa75-47db-99c0-c7a32214d3ca" (UID: "5a64c0f7-aa75-47db-99c0-c7a32214d3ca"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:54:04 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:04.531998   19808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a64c0f7-aa75-47db-99c0-c7a32214d3ca-kube-api-access-4c9lk" (OuterVolumeSpecName: "kube-api-access-4c9lk") pod "5a64c0f7-aa75-47db-99c0-c7a32214d3ca" (UID: "5a64c0f7-aa75-47db-99c0-c7a32214d3ca"). InnerVolumeSpecName "kube-api-access-4c9lk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:54:04 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:04.629898   19808 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4c9lk\" (UniqueName: \"kubernetes.io/projected/5a64c0f7-aa75-47db-99c0-c7a32214d3ca-kube-api-access-4c9lk\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 19 18:54:04 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:04.629932   19808 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a64c0f7-aa75-47db-99c0-c7a32214d3ca-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.032841   19808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zznjz\" (UniqueName: \"kubernetes.io/projected/d43e593d-3a8a-4415-b4cc-21dfcb93cfab-kube-api-access-zznjz\") pod \"d43e593d-3a8a-4415-b4cc-21dfcb93cfab\" (UID: \"d43e593d-3a8a-4415-b4cc-21dfcb93cfab\") "
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.035219   19808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d43e593d-3a8a-4415-b4cc-21dfcb93cfab-kube-api-access-zznjz" (OuterVolumeSpecName: "kube-api-access-zznjz") pod "d43e593d-3a8a-4415-b4cc-21dfcb93cfab" (UID: "d43e593d-3a8a-4415-b4cc-21dfcb93cfab"). InnerVolumeSpecName "kube-api-access-zznjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.133876   19808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7xxf\" (UniqueName: \"kubernetes.io/projected/d311bda0-57b4-46a9-a6b7-207e358dc87d-kube-api-access-p7xxf\") pod \"d311bda0-57b4-46a9-a6b7-207e358dc87d\" (UID: \"d311bda0-57b4-46a9-a6b7-207e358dc87d\") "
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.133983   19808 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zznjz\" (UniqueName: \"kubernetes.io/projected/d43e593d-3a8a-4415-b4cc-21dfcb93cfab-kube-api-access-zznjz\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.136020   19808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d311bda0-57b4-46a9-a6b7-207e358dc87d-kube-api-access-p7xxf" (OuterVolumeSpecName: "kube-api-access-p7xxf") pod "d311bda0-57b4-46a9-a6b7-207e358dc87d" (UID: "d311bda0-57b4-46a9-a6b7-207e358dc87d"). InnerVolumeSpecName "kube-api-access-p7xxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.234631   19808 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p7xxf\" (UniqueName: \"kubernetes.io/projected/d311bda0-57b4-46a9-a6b7-207e358dc87d-kube-api-access-p7xxf\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.454879   19808 scope.go:117] "RemoveContainer" containerID="b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.475528   19808 scope.go:117] "RemoveContainer" containerID="b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: E0919 18:54:05.476584   19808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65" containerID="b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.476635   19808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65"} err="failed to get container status \"b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65\": rpc error: code = Unknown desc = Error response from daemon: No such container: b225e5832fc23e45f141404ceeb488eb7263847a538791d0bded3c4cc1759b65"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.476666   19808 scope.go:117] "RemoveContainer" containerID="c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.501856   19808 scope.go:117] "RemoveContainer" containerID="c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: E0919 18:54:05.502817   19808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1" containerID="c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: I0919 18:54:05.502867   19808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1"} err="failed to get container status \"c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1\": rpc error: code = Unknown desc = Error response from daemon: No such container: c4505d044bf6af31bc0a41a813bfa78063bd956eb4a4663babb31bdb508010c1"
	Sep 19 18:54:05 ubuntu-20-agent-9 kubelet[19808]: E0919 18:54:05.632399   19808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="8045fdc2-10d2-4c87-950b-836e19b0e656"
	
	
	==> storage-provisioner [9e8d0be7e1c3] <==
	I0919 18:40:45.283678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:40:45.298515       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:40:45.298562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:40:45.308417       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:40:45.308614       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_cc2b5815-24e9-4925-ac66-0b620adf1377!
	I0919 18:40:45.310162       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34b00c0b-31b2-4b04-839e-23053beefa03", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_cc2b5815-24e9-4925-ac66-0b620adf1377 became leader
	I0919 18:40:45.409591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_cc2b5815-24e9-4925-ac66-0b620adf1377!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-9/10.154.0.4
	Start Time:       Thu, 19 Sep 2024 18:44:51 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bnb8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bnb8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-9
	  Normal   Pulling    7m43s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m14s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.97s)

                                                
                                    

Test pass (105/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.78
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 1.56
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 72.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 227.43
29 TestAddons/serial/Volcano 38.59
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.48
36 TestAddons/parallel/MetricsServer 5.39
37 TestAddons/parallel/HelmTiller 10.33
39 TestAddons/parallel/CSI 63.51
40 TestAddons/parallel/Headlamp 16.93
41 TestAddons/parallel/CloudSpanner 5.27
43 TestAddons/parallel/NvidiaDevicePlugin 6.24
44 TestAddons/parallel/Yakd 10.45
45 TestAddons/StoppedEnableDisable 10.77
47 TestCertExpiration 230.2
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 30.67
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 29.54
62 TestFunctional/serial/KubeContext 0.05
63 TestFunctional/serial/KubectlGetPods 0.08
65 TestFunctional/serial/MinikubeKubectlCmd 0.11
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
67 TestFunctional/serial/ExtraConfig 38.41
68 TestFunctional/serial/ComponentHealth 0.07
69 TestFunctional/serial/LogsCmd 0.87
70 TestFunctional/serial/LogsFileCmd 0.92
71 TestFunctional/serial/InvalidService 4.17
73 TestFunctional/parallel/ConfigCmd 0.28
74 TestFunctional/parallel/DashboardCmd 4.42
75 TestFunctional/parallel/DryRun 0.17
76 TestFunctional/parallel/InternationalLanguage 0.09
77 TestFunctional/parallel/StatusCmd 0.46
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.23
81 TestFunctional/parallel/ProfileCmd/profile_list 0.21
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.22
84 TestFunctional/parallel/ServiceCmd/DeployApp 9.15
85 TestFunctional/parallel/ServiceCmd/List 0.35
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.16
88 TestFunctional/parallel/ServiceCmd/Format 0.17
89 TestFunctional/parallel/ServiceCmd/URL 0.17
90 TestFunctional/parallel/ServiceCmdConnect 7.33
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 22.05
105 TestFunctional/parallel/MySQL 21.81
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.29
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.72
114 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.4
120 TestFunctional/parallel/License 0.96
121 TestFunctional/delete_echo-server_images 0.03
122 TestFunctional/delete_my-image_image 0.02
123 TestFunctional/delete_minikube_cached_images 0.02
128 TestImageBuild/serial/Setup 13.77
129 TestImageBuild/serial/NormalBuild 2.79
130 TestImageBuild/serial/BuildWithBuildArg 0.92
131 TestImageBuild/serial/BuildWithDockerIgnore 0.71
132 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.73
136 TestJSONOutput/start/Command 29.64
137 TestJSONOutput/start/Audit 0
139 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
142 TestJSONOutput/pause/Command 0.52
143 TestJSONOutput/pause/Audit 0
145 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/unpause/Command 0.43
149 TestJSONOutput/unpause/Audit 0
151 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/stop/Command 10.44
155 TestJSONOutput/stop/Audit 0
157 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
159 TestErrorJSONOutput 0.2
164 TestMainNoArgs 0.05
165 TestMinikubeProfile 36
173 TestPause/serial/Start 27.81
174 TestPause/serial/SecondStartNoReconfiguration 32.06
175 TestPause/serial/Pause 0.54
176 TestPause/serial/VerifyStatus 0.14
177 TestPause/serial/Unpause 0.42
178 TestPause/serial/PauseAgain 0.57
179 TestPause/serial/DeletePaused 1.77
180 TestPause/serial/VerifyDeletedResources 0.07
194 TestRunningBinaryUpgrade 77.49
196 TestStoppedBinaryUpgrade/Setup 2.32
197 TestStoppedBinaryUpgrade/Upgrade 51.83
198 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
199 TestKubernetesUpgrade 318.64
TestDownloadOnly/v1.20.0/json-events (1.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.784042575s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.78s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.502863ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:08
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:08.386577   14478 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:08.386688   14478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:08.386696   14478 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:08.386701   14478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:08.386856   14478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7688/.minikube/bin
	W0919 18:39:08.386976   14478 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19664-7688/.minikube/config/config.json: open /home/jenkins/minikube-integration/19664-7688/.minikube/config/config.json: no such file or directory
	I0919 18:39:08.387521   14478 out.go:352] Setting JSON to true
	I0919 18:39:08.388437   14478 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1283,"bootTime":1726769865,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:08.388530   14478 start.go:139] virtualization: kvm guest
	I0919 18:39:08.391026   14478 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0919 18:39:08.391149   14478 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-7688/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:39:08.391193   14478 notify.go:220] Checking for updates...
	I0919 18:39:08.392632   14478 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:39:08.393969   14478 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:08.395283   14478 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	I0919 18:39:08.396617   14478 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	I0919 18:39:08.397706   14478 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (1.56s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.561771191s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.56s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (58.331202ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:10
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:10.472759   14630 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:10.472863   14630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:10.472872   14630 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:10.472877   14630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:10.473081   14630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7688/.minikube/bin
	I0919 18:39:10.473638   14630 out.go:352] Setting JSON to true
	I0919 18:39:10.474519   14630 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1286,"bootTime":1726769865,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:10.474615   14630 start.go:139] virtualization: kvm guest
	I0919 18:39:10.476779   14630 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0919 18:39:10.476878   14630 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-7688/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:39:10.476915   14630 notify.go:220] Checking for updates...
	I0919 18:39:10.478511   14630 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:39:10.480210   14630 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:10.481690   14630 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	I0919 18:39:10.483077   14630 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	I0919 18:39:10.484599   14630 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
I0919 18:39:12.549980   14466 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:42313 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.57s)

TestOffline (72.43s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m10.747829104s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.678077142s)
--- PASS: TestOffline (72.43s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (47.535766ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (49.59718ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (227.43s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (3m47.428227206s)
--- PASS: TestAddons/Setup (227.43s)

TestAddons/serial/Volcano (38.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.923371ms
addons_test.go:905: volcano-admission stabilized in 8.159229ms
addons_test.go:897: volcano-scheduler stabilized in 8.201813ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-cwgs8" [25d99391-466f-40d8-abf2-97eb5911b387] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004440915s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-znlwf" [562d047b-cf12-47ee-879e-e70a44beab0d] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003800186s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-qbmfx" [8b269928-92a8-472f-abae-75fc209dbd4e] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003927893s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [267056a5-1747-46e4-966a-8ce248d21070] Pending
helpers_test.go:344: "test-job-nginx-0" [267056a5-1747-46e4-966a-8ce248d21070] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [267056a5-1747-46e4-966a-8ce248d21070] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004753054s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.236407019s)
--- PASS: TestAddons/serial/Volcano (38.59s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.48s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tpnqv" [ec3926e4-f2b3-4ef2-bdc1-2f1968f16525] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00419638s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.476153593s)
--- PASS: TestAddons/parallel/InspektorGadget (10.48s)

TestAddons/parallel/MetricsServer (5.39s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.994459ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-69lbb" [fe8f0275-1e1a-4ccf-a704-5147f0ae5bf1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003910379s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.39s)

TestAddons/parallel/HelmTiller (10.33s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.204021ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-m992c" [fe8a69b7-2aed-4388-bbfa-e21130b46e0e] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003921179s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.01971069s)
addons_test.go:480: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.33s)

TestAddons/parallel/CSI (63.51s)

=== RUN   TestAddons/parallel/CSI
I0919 18:54:32.343348   14466 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 18:54:32.348264   14466 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:54:32.348290   14466 kapi.go:107] duration metric: took 4.956307ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.965601ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e11a32c1-b625-4ab8-8fde-beebd8a7ed26] Pending
helpers_test.go:344: "task-pv-pod" [e11a32c1-b625-4ab8-8fde-beebd8a7ed26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e11a32c1-b625-4ab8-8fde-beebd8a7ed26] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003458524s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5deddb6b-06e2-4fa7-b9ee-7da04ec7a6fb] Pending
helpers_test.go:344: "task-pv-pod-restore" [5deddb6b-06e2-4fa7-b9ee-7da04ec7a6fb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5deddb6b-06e2-4fa7-b9ee-7da04ec7a6fb] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003584768s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.322794776s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.51s)

TestAddons/parallel/Headlamp (16.93s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-vxs2b" [dedc82d4-8ed4-4186-a2e0-2bc1ee3c792a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-vxs2b" [dedc82d4-8ed4-4186-a2e0-2bc1ee3c792a] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004148217s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.438226357s)
--- PASS: TestAddons/parallel/Headlamp (16.93s)

TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-kvlt6" [8c1bced6-d226-4029-87fc-a697d1ac73c8] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003382812s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.27s)

TestAddons/parallel/NvidiaDevicePlugin (6.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lqwtk" [33f623f6-129c-4e20-bd9d-5eaefadc7a64] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004556933s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.24s)

TestAddons/parallel/Yakd (10.45s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-nn6kp" [ba22e957-1d5c-4164-9754-2c6a795f8a4d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003573311s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.444143244s)
--- PASS: TestAddons/parallel/Yakd (10.45s)

TestAddons/StoppedEnableDisable (10.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.464511079s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.77s)

TestCertExpiration (230.2s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (15.177972159s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (33.228142447s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.785652585s)
--- PASS: TestCertExpiration (230.20s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19664-7688/.minikube/files/etc/test/nested/copy/14466/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.67s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.673326496s)
--- PASS: TestFunctional/serial/StartWithProxy (30.67s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.54s)

=== RUN   TestFunctional/serial/SoftStart
I0919 19:00:47.603945   14466 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (29.543235886s)
functional_test.go:663: soft start took 29.544074004s for "minikube" cluster.
I0919 19:01:17.147534   14466 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.54s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.40923638s)
functional_test.go:761: restart took 38.409360139s for "minikube" cluster.
I0919 19:01:55.903219   14466 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.41s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

TestFunctional/serial/LogsFileCmd (0.92s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1837862521/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.92s)

TestFunctional/serial/InvalidService (4.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (171.229043ms)

-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.154.0.4:31725 |
	|-----------|-------------|-------------|-------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (43.784558ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (45.553173ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)

TestFunctional/parallel/DashboardCmd (4.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/19 19:02:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 50767: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.42s)

TestFunctional/parallel/DryRun (0.17s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (84.063184ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0919 19:02:06.678837   51122 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:02:06.678969   51122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:06.678978   51122 out.go:358] Setting ErrFile to fd 2...
	I0919 19:02:06.678982   51122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:06.679163   51122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7688/.minikube/bin
	I0919 19:02:06.679744   51122 out.go:352] Setting JSON to false
	I0919 19:02:06.680793   51122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2662,"bootTime":1726769865,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:02:06.680895   51122 start.go:139] virtualization: kvm guest
	I0919 19:02:06.683575   51122 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0919 19:02:06.685071   51122 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-7688/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 19:02:06.685119   51122 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:02:06.685168   51122 notify.go:220] Checking for updates...
	I0919 19:02:06.687668   51122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:02:06.688994   51122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	I0919 19:02:06.691608   51122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	I0919 19:02:06.692922   51122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:02:06.694213   51122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:02:06.695923   51122 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:02:06.696228   51122 exec_runner.go:51] Run: systemctl --version
	I0919 19:02:06.699147   51122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:02:06.709986   51122 out.go:177] * Using the none driver based on existing profile
	I0919 19:02:06.711574   51122 start.go:297] selected driver: none
	I0919 19:02:06.711597   51122 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:02:06.711744   51122 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:02:06.711781   51122 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0919 19:02:06.712139   51122 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0919 19:02:06.714357   51122 out.go:201] 
	W0919 19:02:06.715748   51122 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 19:02:06.717156   51122 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)

TestFunctional/parallel/InternationalLanguage (0.09s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (85.5102ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0919 19:02:06.846651   51151 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:02:06.846769   51151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:06.846774   51151 out.go:358] Setting ErrFile to fd 2...
	I0919 19:02:06.846778   51151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:02:06.847043   51151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7688/.minikube/bin
	I0919 19:02:06.847617   51151 out.go:352] Setting JSON to false
	I0919 19:02:06.848699   51151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2662,"bootTime":1726769865,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:02:06.848807   51151 start.go:139] virtualization: kvm guest
	I0919 19:02:06.850963   51151 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0919 19:02:06.852643   51151 out.go:177]   - MINIKUBE_LOCATION=19664
	W0919 19:02:06.852637   51151 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-7688/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 19:02:06.852697   51151 notify.go:220] Checking for updates...
	I0919 19:02:06.855993   51151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:02:06.857523   51151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	I0919 19:02:06.859143   51151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	I0919 19:02:06.860458   51151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:02:06.861730   51151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:02:06.863376   51151 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0919 19:02:06.863702   51151 exec_runner.go:51] Run: systemctl --version
	I0919 19:02:06.866541   51151 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:02:06.876702   51151 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0919 19:02:06.878328   51151 start.go:297] selected driver: none
	I0919 19:02:06.878348   51151 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:02:06.878445   51151 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:02:06.878482   51151 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0919 19:02:06.878804   51151 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0919 19:02:06.881622   51151 out.go:201] 
	W0919 19:02:06.882898   51151 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 19:02:06.884609   51151 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)

TestFunctional/parallel/StatusCmd (0.46s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "164.641507ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.225044ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "168.066524ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.534323ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-gsqvd" [3afef4d1-c122-41d0-8c7c-1e385122a1b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-gsqvd" [3afef4d1-c122-41d0-8c7c-1e385122a1b5] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003663922s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "331.837033ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.154.0.4:31399
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

TestFunctional/parallel/ServiceCmd/Format (0.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.17s)

TestFunctional/parallel/ServiceCmd/URL (0.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.154.0.4:31399
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.17s)

TestFunctional/parallel/ServiceCmdConnect (7.33s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k4zmq" [ec120a35-a90b-4463-ae68-83781d89ecb1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k4zmq" [ec120a35-a90b-4463-ae68-83781d89ecb1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003375577s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.154.0.4:30553
functional_test.go:1675: http://10.154.0.4:30553: success! body:

Hostname: hello-node-connect-67bdd5bbb4-k4zmq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.154.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=10.154.0.4:30553
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.33s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (22.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [08677678-d561-4db0-804a-2ac2b4ca5c8b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002935447s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3eb56233-1f67-432a-abd9-21e1b9bf8f7d] Pending
helpers_test.go:344: "sp-pod" [3eb56233-1f67-432a-abd9-21e1b9bf8f7d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3eb56233-1f67-432a-abd9-21e1b9bf8f7d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003623021s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.318300627s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [13663733-f583-4bb1-b8da-9aa0bc400014] Pending
helpers_test.go:344: "sp-pod" [13663733-f583-4bb1-b8da-9aa0bc400014] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [13663733-f583-4bb1-b8da-9aa0bc400014] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003854684s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.05s)

TestFunctional/parallel/MySQL (21.81s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-9zrbv" [8c76cab5-f53a-49f5-a5ca-9e92dcf43c0c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-9zrbv" [8c76cab5-f53a-49f5-a5ca-9e92dcf43c0c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.004703294s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-9zrbv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-9zrbv -- mysql -ppassword -e "show databases;": exit status 1 (222.883742ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0919 19:03:06.286216   14466 retry.go:31] will retry after 1.070777016s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-9zrbv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-9zrbv -- mysql -ppassword -e "show databases;": exit status 1 (113.209685ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0919 19:03:07.470796   14466 retry.go:31] will retry after 761.636311ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-9zrbv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-9zrbv -- mysql -ppassword -e "show databases;": exit status 1 (117.122653ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0919 19:03:08.350704   14466 retry.go:31] will retry after 1.23892383s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-9zrbv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.285533462s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.72s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.723474583s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.72s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.4s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

TestFunctional/parallel/License (0.96s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.96s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (13.77s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.770075636s)
--- PASS: TestImageBuild/serial/Setup (13.77s)

TestImageBuild/serial/NormalBuild (2.79s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (2.785615591s)
--- PASS: TestImageBuild/serial/NormalBuild (2.79s)

TestImageBuild/serial/BuildWithBuildArg (0.92s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)

TestImageBuild/serial/BuildWithDockerIgnore (0.71s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.71s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

TestJSONOutput/start/Command (29.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (29.639474643s)
--- PASS: TestJSONOutput/start/Command (29.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.52s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.44s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.437494882s)
--- PASS: TestJSONOutput/stop/Command (10.44s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.330126ms)

-- stdout --
	{"specversion":"1.0","id":"1578c97e-2171-4a57-9165-8444e96d9b08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e486c724-2f60-44c9-9a85-fd10a8316c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"b296d684-d0e4-4c1a-8aa7-03de69b2b05d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2daae53-2059-4562-8a67-9e1e1e479e04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig"}}
	{"specversion":"1.0","id":"9f8c9d20-1808-425e-b1c5-f6ecd229e2b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube"}}
	{"specversion":"1.0","id":"18aa2f22-624a-484c-ae01-c327e7023f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d00448d9-59fe-45c3-9b5d-de821b24b860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6b35efb3-d958-459e-bf06-0046b2400733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (36s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (15.214029528s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.861934118s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.319650475s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (36.00s)

TestPause/serial/Start (27.81s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (27.814272808s)
--- PASS: TestPause/serial/Start (27.81s)

TestPause/serial/SecondStartNoReconfiguration (32.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (32.062060991s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.06s)

TestPause/serial/Pause (0.54s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

TestPause/serial/VerifyStatus (0.14s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (140.297825ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)

TestPause/serial/Unpause (0.42s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.42s)

TestPause/serial/PauseAgain (0.57s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.57s)

TestPause/serial/DeletePaused (1.77s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.772358452s)
--- PASS: TestPause/serial/DeletePaused (1.77s)

TestPause/serial/VerifyDeletedResources (0.07s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

TestRunningBinaryUpgrade (77.49s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3660722164 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3660722164 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (34.59396248s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (36.691572738s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.023181714s)
--- PASS: TestRunningBinaryUpgrade (77.49s)

TestStoppedBinaryUpgrade/Setup (2.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.32s)

TestStoppedBinaryUpgrade/Upgrade (51.83s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2764971186 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2764971186 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.342966575s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2764971186 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2764971186 -p minikube stop: (23.736159171s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.754106481s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.83s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

TestKubernetesUpgrade (318.64s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.06038575s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.357275165s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (77.641986ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m18.370207749s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (66.204272ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.309583628s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.332199687s)
--- PASS: TestKubernetesUpgrade (318.64s)

Test skip (61/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
98 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
101 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
103 TestFunctional/parallel/SSHCmd 0
104 TestFunctional/parallel/CpCmd 0
106 TestFunctional/parallel/FileSync 0
107 TestFunctional/parallel/CertSync 0
112 TestFunctional/parallel/DockerEnv 0
113 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/ImageCommands 0
116 TestFunctional/parallel/NonActiveRuntimeDisabled 0
124 TestGvisorAddon 0
125 TestMultiControlPlane 0
133 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
160 TestKicCustomNetwork 0
161 TestKicExistingNetwork 0
162 TestKicCustomSubnet 0
163 TestKicStaticIP 0
166 TestMountStart 0
167 TestMultiNode 0
168 TestNetworkPlugins 0
169 TestNoKubernetes 0
170 TestChangeNoneUser 0
181 TestPreload 0
182 TestScheduledStopWindows 0
183 TestScheduledStopUnix 0
184 TestSkaffold 0
187 TestStartStop/group/old-k8s-version 0.14
188 TestStartStop/group/newest-cni 0.14
189 TestStartStop/group/default-k8s-diff-port 0.14
190 TestStartStop/group/no-preload 0.14
191 TestStartStop/group/disable-driver-mounts 0.14
192 TestStartStop/group/embed-certs 0.14
193 TestInsufficientStorage 0
200 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.14s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)