Test Report: none_Linux 19696

60137f5eb61dd17472aeb1c9d9b63bd7ae7f04e6:2024-09-24:36347

Failed tests (1/167)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 71.75        |
TestAddons/parallel/Registry (71.75s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.540706ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jh4zk" [1fd26fe1-569a-41d8-bd27-41ea6d31c232] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002739316s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-twks8" [b1bc2a37-dafc-48f7-94a2-b80e57e12b9a] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00387983s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080372302s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
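
Note: the failure above reduces to the in-cluster probe exiting non-zero after 1m0s instead of returning "HTTP/1.1 200". Below is a minimal Go sketch for re-running that probe by hand, assuming kubectl is on PATH and the "minikube" context exists; the kubectl arguments are copied from the log, while the 2-minute timeout and the dropped -t flag (no TTY when run from a script) are assumptions, not values taken from addons_test.go.

// probe.go: re-run the registry reachability check from TestAddons/parallel/Registry.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Illustrative timeout; the test itself relies on kubectl's own waiting, not this wrapper.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "kubectl", "--context", "minikube",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// Mirrors the "exit status 1" recorded at addons_test.go:343.
		fmt.Println("probe failed:", err)
		return
	}
	if strings.Contains(string(out), "HTTP/1.1 200") {
		// The response the assertion at addons_test.go:349 expects.
		fmt.Println("registry reachable")
	}
}
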
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/23 23:50:22 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:36161               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:38 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:40 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 23:40 UTC | 23 Sep 24 23:41 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:38:50
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
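
	The "Last Start" lines below follow the klog/glog record format documented in the header above. The following is a short Go sketch for pulling the fields out of one such line; the regular expression and field names are illustrative, not taken from minikube.

	// parseklog.go: split an [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg record into fields.
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ \]]+:\d+)\] (.*)$`)

	func main() {
		line := "I0923 23:38:50.270310   18432 out.go:345] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
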
	I0923 23:38:50.270310   18432 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:38:50.270435   18432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:50.270445   18432 out.go:358] Setting ErrFile to fd 2...
	I0923 23:38:50.270452   18432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:50.270607   18432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7453/.minikube/bin
	I0923 23:38:50.271160   18432 out.go:352] Setting JSON to false
	I0923 23:38:50.272047   18432 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1279,"bootTime":1727133451,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:38:50.272131   18432 start.go:139] virtualization: kvm guest
	I0923 23:38:50.274166   18432 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:38:50.275376   18432 notify.go:220] Checking for updates...
	I0923 23:38:50.275382   18432 out.go:177]   - MINIKUBE_LOCATION=19696
	W0923 23:38:50.275349   18432 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7453/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 23:38:50.278047   18432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:38:50.279401   18432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	I0923 23:38:50.280644   18432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	I0923 23:38:50.281888   18432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:38:50.283113   18432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:38:50.284474   18432 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:38:50.294637   18432 out.go:177] * Using the none driver based on user configuration
	I0923 23:38:50.295837   18432 start.go:297] selected driver: none
	I0923 23:38:50.295850   18432 start.go:901] validating driver "none" against <nil>
	I0923 23:38:50.295861   18432 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:38:50.295904   18432 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 23:38:50.296265   18432 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0923 23:38:50.296793   18432 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:38:50.297039   18432 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:38:50.297066   18432 cni.go:84] Creating CNI manager for ""
	I0923 23:38:50.297112   18432 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 23:38:50.297129   18432 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:38:50.297164   18432 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:50.298512   18432 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 23:38:50.299942   18432 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/config.json ...
	I0923 23:38:50.299976   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/config.json: {Name:mkfc6f5cf141c223524c7eb348a8ed535e6b41a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:50.300111   18432 start.go:360] acquireMachinesLock for minikube: {Name:mk6e7fa6ceaa90ef14fbf41d1e1dd11e8c8d9b57 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 23:38:50.300152   18432 start.go:364] duration metric: took 26.721µs to acquireMachinesLock for "minikube"
	I0923 23:38:50.300171   18432 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 23:38:50.300238   18432 start.go:125] createHost starting for "" (driver="none")
	I0923 23:38:50.301666   18432 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0923 23:38:50.302759   18432 exec_runner.go:51] Run: systemctl --version
	I0923 23:38:50.305235   18432 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0923 23:38:50.305281   18432 client.go:168] LocalClient.Create starting
	I0923 23:38:50.305331   18432 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7453/.minikube/certs/ca.pem
	I0923 23:38:50.305372   18432 main.go:141] libmachine: Decoding PEM data...
	I0923 23:38:50.305394   18432 main.go:141] libmachine: Parsing certificate...
	I0923 23:38:50.305462   18432 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7453/.minikube/certs/cert.pem
	I0923 23:38:50.305496   18432 main.go:141] libmachine: Decoding PEM data...
	I0923 23:38:50.305518   18432 main.go:141] libmachine: Parsing certificate...
	I0923 23:38:50.305859   18432 client.go:171] duration metric: took 570.687µs to LocalClient.Create
	I0923 23:38:50.305883   18432 start.go:167] duration metric: took 648.675µs to libmachine.API.Create "minikube"
	I0923 23:38:50.305890   18432 start.go:293] postStartSetup for "minikube" (driver="none")
	I0923 23:38:50.305932   18432 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:38:50.305976   18432 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:38:50.315767   18432 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 23:38:50.315802   18432 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 23:38:50.315815   18432 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 23:38:50.317962   18432 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0923 23:38:50.319205   18432 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7453/.minikube/addons for local assets ...
	I0923 23:38:50.319245   18432 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7453/.minikube/files for local assets ...
	I0923 23:38:50.319262   18432 start.go:296] duration metric: took 13.366723ms for postStartSetup
	I0923 23:38:50.319826   18432 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/config.json ...
	I0923 23:38:50.319957   18432 start.go:128] duration metric: took 19.710134ms to createHost
	I0923 23:38:50.319970   18432 start.go:83] releasing machines lock for "minikube", held for 19.807645ms
	I0923 23:38:50.320297   18432 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 23:38:50.320391   18432 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0923 23:38:50.322722   18432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 23:38:50.323012   18432 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:38:50.331718   18432 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 23:38:50.331751   18432 start.go:495] detecting cgroup driver to use...
	I0923 23:38:50.331791   18432 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 23:38:50.331898   18432 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:38:50.350698   18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 23:38:50.359770   18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 23:38:50.369309   18432 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 23:38:50.369355   18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 23:38:50.377783   18432 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 23:38:50.386277   18432 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 23:38:50.395920   18432 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 23:38:50.404729   18432 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:38:50.413504   18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 23:38:50.422318   18432 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 23:38:50.430466   18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 23:38:50.438360   18432 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:38:50.446030   18432 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 23:38:50.452656   18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 23:38:50.658282   18432 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0923 23:38:50.725669   18432 start.go:495] detecting cgroup driver to use...
	I0923 23:38:50.725715   18432 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 23:38:50.725822   18432 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:38:50.743617   18432 exec_runner.go:51] Run: which cri-dockerd
	I0923 23:38:50.744518   18432 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 23:38:50.752162   18432 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0923 23:38:50.752183   18432 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 23:38:50.752209   18432 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 23:38:50.758892   18432 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 23:38:50.759019   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube362157187 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 23:38:50.766279   18432 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0923 23:38:50.970256   18432 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0923 23:38:51.169441   18432 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 23:38:51.169616   18432 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0923 23:38:51.169631   18432 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0923 23:38:51.169677   18432 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0923 23:38:51.177306   18432 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0923 23:38:51.177455   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1428434658 /etc/docker/daemon.json
	I0923 23:38:51.185851   18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 23:38:51.390709   18432 exec_runner.go:51] Run: sudo systemctl restart docker
	I0923 23:38:51.685272   18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 23:38:51.695857   18432 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0923 23:38:51.711715   18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 23:38:51.721689   18432 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0923 23:38:51.925576   18432 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0923 23:38:52.122645   18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 23:38:52.317327   18432 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0923 23:38:52.330835   18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 23:38:52.341324   18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 23:38:52.543478   18432 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0923 23:38:52.609748   18432 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 23:38:52.609807   18432 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0923 23:38:52.611110   18432 start.go:563] Will wait 60s for crictl version
	I0923 23:38:52.611142   18432 exec_runner.go:51] Run: which crictl
	I0923 23:38:52.611931   18432 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0923 23:38:52.642081   18432 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 23:38:52.642135   18432 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 23:38:52.662180   18432 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 23:38:52.684113   18432 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 23:38:52.684188   18432 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0923 23:38:52.686821   18432 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0923 23:38:52.688021   18432 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:38:52.688115   18432 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 23:38:52.688125   18432 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0923 23:38:52.688210   18432 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0923 23:38:52.688247   18432 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0923 23:38:52.734183   18432 cni.go:84] Creating CNI manager for ""
	I0923 23:38:52.734205   18432 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 23:38:52.734214   18432 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:38:52.734233   18432 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:38:52.734372   18432 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 23:38:52.734433   18432 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:38:52.742593   18432 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 23:38:52.742639   18432 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 23:38:52.751014   18432 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 23:38:52.751017   18432 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 23:38:52.751057   18432 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 23:38:52.751065   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 23:38:52.751065   18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:38:52.751106   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 23:38:52.762918   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 23:38:52.798293   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3588237207 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 23:38:52.806913   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3526895639 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 23:38:52.840509   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1168637573 /var/lib/minikube/binaries/v1.31.1/kubelet
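
	The three dl.k8s.io URLs above carry a "?checksum=file:<url>.sha256" hint, i.e. each binary is verified against its published SHA-256 file rather than trusted blindly. The following is a stdlib-only Go sketch of that download-and-verify pattern, under the assumption that the checksum file's first whitespace-separated field is the hex digest; minikube's actual downloader is not reproduced here.

	// fetchverify.go: download a release binary and check it against its published .sha256 file.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		h := sha256.Sum256(bin)
		got := hex.EncodeToString(h[:])
		// Assumed layout: the digest is the first field of the checksum file.
		want := strings.Fields(string(sum))[0]
		if got != want {
			panic("checksum mismatch for " + base)
		}
		fmt.Println("kubectl verified:", got)
	}
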
	I0923 23:38:52.904582   18432 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 23:38:52.912454   18432 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0923 23:38:52.912473   18432 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 23:38:52.912507   18432 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 23:38:52.919643   18432 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0923 23:38:52.919794   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1995903575 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 23:38:52.928167   18432 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0923 23:38:52.928183   18432 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0923 23:38:52.928212   18432 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0923 23:38:52.935405   18432 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:38:52.935603   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3360056781 /lib/systemd/system/kubelet.service
	I0923 23:38:52.943449   18432 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0923 23:38:52.943547   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2971247699 /var/tmp/minikube/kubeadm.yaml.new
	I0923 23:38:52.951161   18432 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0923 23:38:52.952477   18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 23:38:53.161085   18432 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 23:38:53.175699   18432 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube for IP: 10.138.0.48
	I0923 23:38:53.175726   18432 certs.go:194] generating shared ca certs ...
	I0923 23:38:53.175748   18432 certs.go:226] acquiring lock for ca certs: {Name:mk3948639b4bfbef52e479ad0192b298c7e79629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.176002   18432 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.key
	I0923 23:38:53.176080   18432 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7453/.minikube/proxy-client-ca.key
	I0923 23:38:53.176094   18432 certs.go:256] generating profile certs ...
	I0923 23:38:53.176160   18432 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.key
	I0923 23:38:53.176177   18432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.crt with IP's: []
	I0923 23:38:53.282171   18432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.crt ...
	I0923 23:38:53.282198   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.crt: {Name:mk19d3f7393d5385c274c75a2b427d7742ec5ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.282323   18432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.key ...
	I0923 23:38:53.282333   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.key: {Name:mkd1d47c747a30b58f8f2d3871133d0fcc0a8eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.282400   18432 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0923 23:38:53.282414   18432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0923 23:38:53.412602   18432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0923 23:38:53.412632   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkd235de71de07cef6bb7559bfcd80420fdebba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.412766   18432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0923 23:38:53.412777   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkd77e84c0de664396583aa1df4aabcb182fad66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.412828   18432 certs.go:381] copying /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt
	I0923 23:38:53.412898   18432 certs.go:385] copying /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key
	I0923 23:38:53.412947   18432 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key
	I0923 23:38:53.412960   18432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0923 23:38:53.480889   18432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt ...
	I0923 23:38:53.480919   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt: {Name:mkabfef810d894a1c07fb8f4032a43d22b3a3c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.481037   18432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key ...
	I0923 23:38:53.481047   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key: {Name:mkddb05aa7d0f840b3d5b215353336e49719ffc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
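
	The certs.go/crypto.go steps above generate CA-signed profile certificates, e.g. an apiserver cert valid for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]. Below is a compact crypto/x509 sketch of that kind of operation; the key size, serial numbers, and validity period are illustrative, and minikube's own cert helpers are not reproduced.

	// signcert.go: create a CA and sign a serving certificate carrying IP SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// CA key and self-signed CA certificate (stand-in for minikubeCA).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Serving certificate carrying the IP SANs shown in the log above.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("10.138.0.48"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("CA-signed serving cert with IP SANs: %d DER bytes\n", len(leafDER))
	}
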
	I0923 23:38:53.481201   18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 23:38:53.481231   18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/ca.pem (1078 bytes)
	I0923 23:38:53.481260   18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:38:53.481286   18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/key.pem (1679 bytes)
	I0923 23:38:53.481898   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:38:53.482016   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1818874626 /var/lib/minikube/certs/ca.crt
	I0923 23:38:53.490636   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 23:38:53.490751   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3177926111 /var/lib/minikube/certs/ca.key
	I0923 23:38:53.498069   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:38:53.498175   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1135613230 /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 23:38:53.505441   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 23:38:53.505531   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1903075806 /var/lib/minikube/certs/proxy-client-ca.key
	I0923 23:38:53.512617   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0923 23:38:53.512722   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube982993425 /var/lib/minikube/certs/apiserver.crt
	I0923 23:38:53.519683   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 23:38:53.519829   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3881294579 /var/lib/minikube/certs/apiserver.key
	I0923 23:38:53.527203   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:38:53.527312   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4243338006 /var/lib/minikube/certs/proxy-client.crt
	I0923 23:38:53.535151   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 23:38:53.535248   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2141483979 /var/lib/minikube/certs/proxy-client.key
	I0923 23:38:53.542239   18432 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0923 23:38:53.542253   18432 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:53.542278   18432 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:53.549237   18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:38:53.549349   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2495434143 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:53.556709   18432 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:38:53.556801   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube721483269 /var/lib/minikube/kubeconfig
	I0923 23:38:53.564306   18432 exec_runner.go:51] Run: openssl version
	I0923 23:38:53.567002   18432 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:38:53.574776   18432 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:53.576064   18432 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:53.576099   18432 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:53.578676   18432 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 23:38:53.586464   18432 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:38:53.587521   18432 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:38:53.587552   18432 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:53.587649   18432 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 23:38:53.602349   18432 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:38:53.611003   18432 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:38:53.618532   18432 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 23:38:53.639425   18432 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:38:53.646815   18432 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:38:53.646844   18432 kubeadm.go:157] found existing configuration files:
	
	I0923 23:38:53.646881   18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:38:53.654004   18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:38:53.654041   18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:38:53.661523   18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:38:53.668570   18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:38:53.668611   18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:38:53.675461   18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:38:53.682863   18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:38:53.682896   18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:38:53.689686   18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:38:53.697813   18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:38:53.697848   18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 23:38:53.704762   18432 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 23:38:53.735530   18432 kubeadm.go:310] W0923 23:38:53.735418   19311 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:38:53.736122   18432 kubeadm.go:310] W0923 23:38:53.736082   19311 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
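The two kubeadm warnings above flag minikube's generated config as using the deprecated kubeadm.k8s.io/v1beta3 API. A sketch of the migration kubeadm itself suggests, using the config path visible elsewhere in this log (the output filename is illustrative):

    # rewrite the v1beta3 config with the current API version (output path assumed)
    sudo kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.migrated.yaml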
	I0923 23:38:53.737735   18432 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:38:53.737802   18432 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:38:53.828653   18432 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 23:38:53.828731   18432 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:38:53.828739   18432 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:38:53.828744   18432 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:38:53.839942   18432 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:38:53.843310   18432 out.go:235]   - Generating certificates and keys ...
	I0923 23:38:53.843348   18432 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:38:53.843361   18432 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:38:53.945679   18432 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:38:54.013620   18432 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:38:54.171505   18432 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:38:54.397875   18432 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:38:54.536475   18432 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:38:54.536660   18432 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0923 23:38:54.651107   18432 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 23:38:54.651226   18432 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0923 23:38:54.826594   18432 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 23:38:54.958063   18432 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 23:38:55.185496   18432 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 23:38:55.185674   18432 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 23:38:55.290988   18432 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 23:38:55.491177   18432 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 23:38:55.580863   18432 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 23:38:55.768520   18432 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 23:38:55.900360   18432 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 23:38:55.900917   18432 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 23:38:55.903139   18432 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 23:38:55.905066   18432 out.go:235]   - Booting up control plane ...
	I0923 23:38:55.905097   18432 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 23:38:55.905118   18432 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 23:38:55.905467   18432 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 23:38:55.927117   18432 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 23:38:55.931087   18432 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 23:38:55.931107   18432 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 23:38:56.149006   18432 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 23:38:56.149045   18432 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 23:38:56.650524   18432 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.498698ms
	I0923 23:38:56.650549   18432 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 23:39:01.151800   18432 kubeadm.go:310] [api-check] The API server is healthy after 4.50125584s
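The kubelet-check above polls the endpoint named at 23:38:56.149045. The same probe by hand (a sketch; the endpoint is plain HTTP and local-only):

    curl -s http://127.0.0.1:10248/healthz   # prints "ok" once the kubelet is healthy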
	I0923 23:39:01.161767   18432 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 23:39:01.171166   18432 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 23:39:01.185870   18432 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 23:39:01.185897   18432 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 23:39:01.192984   18432 kubeadm.go:310] [bootstrap-token] Using token: 8apy58.p47gjyqdfoakrmhq
	I0923 23:39:01.194627   18432 out.go:235]   - Configuring RBAC rules ...
	I0923 23:39:01.194654   18432 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 23:39:01.197083   18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 23:39:01.202508   18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 23:39:01.204670   18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 23:39:01.206819   18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 23:39:01.208888   18432 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 23:39:01.557088   18432 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 23:39:01.976738   18432 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 23:39:02.557773   18432 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 23:39:02.558566   18432 kubeadm.go:310] 
	I0923 23:39:02.558575   18432 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 23:39:02.558581   18432 kubeadm.go:310] 
	I0923 23:39:02.558586   18432 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 23:39:02.558589   18432 kubeadm.go:310] 
	I0923 23:39:02.558593   18432 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 23:39:02.558597   18432 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 23:39:02.558601   18432 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 23:39:02.558605   18432 kubeadm.go:310] 
	I0923 23:39:02.558608   18432 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 23:39:02.558611   18432 kubeadm.go:310] 
	I0923 23:39:02.558615   18432 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 23:39:02.558618   18432 kubeadm.go:310] 
	I0923 23:39:02.558622   18432 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 23:39:02.558625   18432 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 23:39:02.558628   18432 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 23:39:02.558631   18432 kubeadm.go:310] 
	I0923 23:39:02.558641   18432 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 23:39:02.558645   18432 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 23:39:02.558650   18432 kubeadm.go:310] 
	I0923 23:39:02.558653   18432 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8apy58.p47gjyqdfoakrmhq \
	I0923 23:39:02.558659   18432 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:db47f7bc1500c7cae7d7c11015704d36d474e91b604b6cfa650231ef586748b8 \
	I0923 23:39:02.558663   18432 kubeadm.go:310] 	--control-plane 
	I0923 23:39:02.558668   18432 kubeadm.go:310] 
	I0923 23:39:02.558679   18432 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 23:39:02.558683   18432 kubeadm.go:310] 
	I0923 23:39:02.558687   18432 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8apy58.p47gjyqdfoakrmhq \
	I0923 23:39:02.558691   18432 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:db47f7bc1500c7cae7d7c11015704d36d474e91b604b6cfa650231ef586748b8 
	I0923 23:39:02.561374   18432 cni.go:84] Creating CNI manager for ""
	I0923 23:39:02.561398   18432 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 23:39:02.563032   18432 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 23:39:02.564265   18432 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0923 23:39:02.574525   18432 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 23:39:02.574656   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1688956124 /etc/cni/net.d/1-k8s.conflist
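The 496-byte conflist copied above is not reproduced in the log; minikube's bridge template looks roughly like the following (illustrative sketch — field values, including the pod subnet, are assumptions, not the actual file contents):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }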
	I0923 23:39:02.584150   18432 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 23:39:02.584196   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:02.584228   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_23T23_39_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0923 23:39:02.593188   18432 ops.go:34] apiserver oom_adj: -16
	I0923 23:39:02.653032   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:03.153273   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:03.653372   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.153389   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:04.653242   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.154035   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:05.653420   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.154011   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.653107   18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:39:06.719087   18432 kubeadm.go:1113] duration metric: took 4.134931087s to wait for elevateKubeSystemPrivileges
	I0923 23:39:06.719125   18432 kubeadm.go:394] duration metric: took 13.131573818s to StartCluster
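Between 23:39:02.653 and 23:39:06.653 the same `kubectl get sa default` is retried until the cluster's default ServiceAccount exists, which is what the 4.13s elevateKubeSystemPrivileges metric above measures. A minimal bash equivalent of that wait (the 500ms interval is inferred from the timestamps):

    # block until the default ServiceAccount can be fetched
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done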
	I0923 23:39:06.719148   18432 settings.go:142] acquiring lock: {Name:mk8828190f1928b74029f5e970e6ecd99a25cc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:06.719217   18432 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7453/kubeconfig
	I0923 23:39:06.719833   18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/kubeconfig: {Name:mka6608b58d27d209fca19aaae65767ddd8ef430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:39:06.720048   18432 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 23:39:06.720097   18432 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
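Each entry in the toEnable map above corresponds to an addon toggle that can also be flipped from the CLI, for example:

    minikube addons enable registry    # matches registry:true in the map above
    minikube addons list               # shows the resulting enabled/disabled state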
	I0923 23:39:06.720224   18432 addons.go:69] Setting yakd=true in profile "minikube"
	I0923 23:39:06.720239   18432 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0923 23:39:06.720248   18432 addons.go:234] Setting addon yakd=true in "minikube"
	I0923 23:39:06.720253   18432 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0923 23:39:06.720253   18432 addons.go:69] Setting volcano=true in profile "minikube"
	I0923 23:39:06.720250   18432 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0923 23:39:06.720273   18432 addons.go:234] Setting addon volcano=true in "minikube"
	I0923 23:39:06.720289   18432 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0923 23:39:06.720304   18432 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0923 23:39:06.720310   18432 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0923 23:39:06.720318   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.720321   18432 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0923 23:39:06.720322   18432 addons.go:69] Setting registry=true in profile "minikube"
	I0923 23:39:06.720312   18432 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0923 23:39:06.720335   18432 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0923 23:39:06.720337   18432 addons.go:234] Setting addon registry=true in "minikube"
	I0923 23:39:06.720339   18432 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0923 23:39:06.720338   18432 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:39:06.720345   18432 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0923 23:39:06.720351   18432 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0923 23:39:06.720362   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.720371   18432 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0923 23:39:06.720376   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.720288   18432 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0923 23:39:06.720413   18432 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0923 23:39:06.720339   18432 mustload.go:65] Loading cluster: minikube
	I0923 23:39:06.720437   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.720356   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.720672   18432 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:39:06.720280   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.721066   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.721066   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.720229   18432 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0923 23:39:06.720325   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.721467   18432 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0923 23:39:06.721519   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.721539   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.721559   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.721617   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.721907   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.721930   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.721962   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.722079   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.722102   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.722141   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.722323   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.722343   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.722374   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.720280   18432 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0923 23:39:06.723404   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.721080   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.720280   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.723861   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.723907   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.723958   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.724745   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.724778   18432 out.go:177] * Configuring local host environment ...
	I0923 23:39:06.720327   18432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0923 23:39:06.725452   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.725462   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.725487   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.725618   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.725645   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.725649   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.725689   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.724791   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.721472   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.726266   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.726279   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.726281   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.726311   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0923 23:39:06.726342   18432 out.go:270] * 
	W0923 23:39:06.726361   18432 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0923 23:39:06.726375   18432 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0923 23:39:06.726385   18432 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0923 23:39:06.726391   18432 out.go:270] * 
	W0923 23:39:06.726429   18432 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0923 23:39:06.726447   18432 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0923 23:39:06.726455   18432 out.go:270] * 
	W0923 23:39:06.726484   18432 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0923 23:39:06.726724   18432 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0923 23:39:06.726733   18432 out.go:270] * 
	W0923 23:39:06.726741   18432 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
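The warning block above ends by naming the env-var alternative to the manual mv/chown it suggests. A sketch of a none-driver start that applies it (using sudo -E so the variable survives the privilege switch — that detail is an assumption about the invocation, not taken from this log):

    export CHANGE_MINIKUBE_NONE_USER=true
    sudo -E minikube start --driver=none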
	I0923 23:39:06.726779   18432 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 23:39:06.726340   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.727921   18432 out.go:177] * Verifying Kubernetes components...
	I0923 23:39:06.729794   18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 23:39:06.740819   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.742516   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.742883   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.744466   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.760037   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.760169   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.760197   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.760482   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.760507   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.760540   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.761984   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.762007   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.762043   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.771939   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.772013   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.773637   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.773719   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.774318   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.774369   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
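Pairs of lines like the ones above locate the apiserver's freezer cgroup via /proc and then read its state; "THAWED" means the process group is running, not paused. The same check by hand, deriving the path instead of hard-coding it (the cut field assumes the usual `id:controller:path` layout of /proc/<pid>/cgroup under cgroup v1, and that the path contains no further colons):

    cg=$(sudo egrep '^[0-9]+:freezer:' /proc/19739/cgroup | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"   # THAWED or FROZEN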
	I0923 23:39:06.785246   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.785320   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.785343   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.785450   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.785470   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.785488   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.786334   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.786397   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.787073   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.787100   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.792679   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.793230   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.793259   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
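Each "Checking apiserver healthz" line above is an HTTPS GET that the parallel addon goroutines race through. The same probe by hand (a sketch; -k because the cluster CA is self-signed — passing minikube's CA via --cacert would work too):

    curl -sk https://10.138.0.48:8443/healthz   # body "ok" on HTTP 200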
	I0923 23:39:06.794669   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 23:39:06.794723   18432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 23:39:06.794743   18432 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 23:39:06.796161   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 23:39:06.796192   18432 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:06.798648   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 23:39:06.799846   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 23:39:06.801013   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 23:39:06.801691   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.801738   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.802262   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.802323   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.802491   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.802528   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.803422   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 23:39:06.804804   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 23:39:06.804812   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.805962   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.807241   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 23:39:06.807891   18432 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 23:39:06.807920   18432 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 23:39:06.808052   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2924540067 /etc/kubernetes/addons/ig-namespace.yaml
	I0923 23:39:06.808268   18432 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0923 23:39:06.808282   18432 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:06.808300   18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 23:39:06.808320   18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:06.808324   18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 23:39:06.808476   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube414764368 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 23:39:06.809509   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.809550   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.809621   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.809664   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.809771   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.809811   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.814895   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.814943   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.820580   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.820603   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.823904   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.823956   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.825262   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.826154   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.826175   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.826481   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.826503   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.826935   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.826952   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.826970   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 23:39:06.827101   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube559590903 /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:06.827140   18432 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 23:39:06.828138   18432 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 23:39:06.828187   18432 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 23:39:06.828344   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671716264 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 23:39:06.828447   18432 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 23:39:06.828468   18432 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 23:39:06.828630   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3001632555 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 23:39:06.829740   18432 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 23:39:06.829766   18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 23:39:06.829902   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3417521862 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 23:39:06.830102   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.830117   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.831140   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.831185   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.833619   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.834900   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.835145   18432 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0923 23:39:06.835183   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.836502   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.837088   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.837983   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.844083   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.838780   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.844166   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.844217   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.842605   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:39:06.843573   18432 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 23:39:06.844575   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 23:39:06.843605   18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 23:39:06.844639   18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 23:39:06.843617   18432 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
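The bash pipeline above is hard to read through the sed escaping. Unescaped, it inserts this stanza into the CoreDNS Corefile ahead of the `forward . /etc/resolv.conf` line (and a `log` directive ahead of `errors`), so that host.minikube.internal resolves to the host machine:

    hosts {
       127.0.0.1 host.minikube.internal
       fallthrough
    }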
	I0923 23:39:06.845240   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3451982674 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 23:39:06.845265   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1270455008 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 23:39:06.845541   18432 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 23:39:06.846476   18432 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 23:39:06.846507   18432 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 23:39:06.847150   18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 23:39:06.847177   18432 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 23:39:06.847306   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1294110379 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 23:39:06.847740   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.847762   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.848077   18432 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 23:39:06.848103   18432 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 23:39:06.848208   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2507528624 /etc/kubernetes/addons/yakd-ns.yaml
	I0923 23:39:06.848401   18432 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:06.848423   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 23:39:06.848728   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3231521951 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:06.848924   18432 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 23:39:06.848946   18432 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 23:39:06.849054   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2557109985 /etc/kubernetes/addons/ig-role.yaml
	I0923 23:39:06.851711   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.851956   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.855694   18432 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 23:39:06.855696   18432 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 23:39:06.860776   18432 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:06.860810   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 23:39:06.860933   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1389798496 /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:06.862698   18432 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 23:39:06.862918   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.862939   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.868687   18432 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 23:39:06.868716   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 23:39:06.868858   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3302855751 /etc/kubernetes/addons/registry-rc.yaml
	I0923 23:39:06.873084   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.873107   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.876751   18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 23:39:06.876777   18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 23:39:06.876911   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube801791274 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 23:39:06.878473   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.878526   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.881301   18432 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 23:39:06.881323   18432 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 23:39:06.881427   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1946111745 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 23:39:06.884379   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:39:06.886585   18432 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 23:39:06.886609   18432 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 23:39:06.886721   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3958992613 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 23:39:06.889033   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 23:39:06.892041   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.892063   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.892457   18432 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 23:39:06.892480   18432 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 23:39:06.892603   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3191309814 /etc/kubernetes/addons/yakd-sa.yaml
	I0923 23:39:06.898188   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.898210   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.898531   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.901243   18432 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 23:39:06.901268   18432 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 23:39:06.901373   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3857214416 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 23:39:06.903112   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.903866   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.904026   18432 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 23:39:06.905040   18432 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0923 23:39:06.905072   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:06.905603   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:06.905621   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:06.905651   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:06.907984   18432 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 23:39:06.910887   18432 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 23:39:06.913355   18432 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 23:39:06.913392   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 23:39:06.913876   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1734807291 /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 23:39:06.918550   18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 23:39:06.918581   18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 23:39:06.918696   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube690570401 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 23:39:06.919332   18432 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 23:39:06.919359   18432 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 23:39:06.919472   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3858344043 /etc/kubernetes/addons/yakd-crb.yaml
	I0923 23:39:06.924516   18432 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 23:39:06.924542   18432 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 23:39:06.924685   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube215688014 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 23:39:06.928641   18432 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 23:39:06.928675   18432 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 23:39:06.928789   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3260173872 /etc/kubernetes/addons/registry-svc.yaml
	I0923 23:39:06.934067   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 23:39:06.934327   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:06.935575   18432 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 23:39:06.935609   18432 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 23:39:06.935707   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1037723576 /etc/kubernetes/addons/yakd-svc.yaml
	I0923 23:39:06.938890   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.938964   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.940690   18432 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 23:39:06.940715   18432 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 23:39:06.940816   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube79155311 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 23:39:06.942190   18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 23:39:06.942230   18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 23:39:06.942345   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4114817471 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 23:39:06.945849   18432 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 23:39:06.945874   18432 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 23:39:06.945991   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2151043050 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 23:39:06.953050   18432 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:06.953078   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 23:39:06.953199   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3891720001 /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:39:06.954790   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:06.954851   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:06.959232   18432 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:06.959259   18432 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 23:39:06.959382   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2050564571 /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:06.968039   18432 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:06.968064   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 23:39:06.968175   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1091340882 /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:06.972151   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
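Once the three registry manifests above are applied, a quick way to confirm the objects landed (object kinds are assumptions inferred from the file names — a ReplicationController, a Service, and a proxy DaemonSet):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.1/kubectl -n kube-system get rc,svc,ds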
	I0923 23:39:06.977110   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:06.977136   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:06.980817   18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 23:39:06.980851   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 23:39:06.981658   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:06.981698   18432 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:06.981710   18432 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0923 23:39:06.981717   18432 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:06.981725   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2665118350 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 23:39:06.981752   18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:06.987388   18432 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 23:39:06.987416   18432 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 23:39:06.987529   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1138063822 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 23:39:06.991400   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:39:06.999292   18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 23:39:06.999322   18432 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 23:39:06.999456   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1860913612 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 23:39:07.001249   18432 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 23:39:07.001381   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4223334441 /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:07.001437   18432 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 23:39:07.001463   18432 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 23:39:07.001945   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2361636232 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 23:39:07.003454   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:07.003480   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:07.009215   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:07.012303   18432 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 23:39:07.014129   18432 out.go:177]   - Using image docker.io/busybox:stable
	I0923 23:39:07.015539   18432 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:07.015573   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 23:39:07.015705   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2554745198 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:07.018480   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 23:39:07.021258   18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 23:39:07.021283   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 23:39:07.021396   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2809208321 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 23:39:07.032669   18432 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:07.032699   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 23:39:07.032812   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1623682355 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:07.036622   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:39:07.041372   18432 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 23:39:07.041405   18432 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 23:39:07.041518   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube24433330 /etc/kubernetes/addons/ig-crd.yaml
	I0923 23:39:07.042501   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:39:07.073625   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:07.086049   18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 23:39:07.086084   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 23:39:07.086205   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3932984417 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 23:39:07.087480   18432 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:07.087512   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 23:39:07.087646   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube147410042 /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:07.119854   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:39:07.165787   18432 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 23:39:07.222326   18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:07.222363   18432 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 23:39:07.222477   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1296068170 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:07.241784   18432 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0923 23:39:07.245050   18432 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0923 23:39:07.245071   18432 node_ready.go:38] duration metric: took 3.25547ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0923 23:39:07.245081   18432 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:07.260165   18432 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:07.273022   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:07.534380   18432 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
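	The injected record above ends up in the coredns ConfigMap in "kube-system"; one way to inspect what was written (a sketch using a standard kubectl query) is:
	
		kubectl --context minikube -n kube-system get configmap coredns -o yaml
	
	which on the none driver should show host.minikube.internal mapped to 127.0.0.1.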
	I0923 23:39:07.715409   18432 addons.go:475] Verifying addon registry=true in "minikube"
	I0923 23:39:07.717282   18432 out.go:177] * Verifying registry addon...
	I0923 23:39:07.720495   18432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 23:39:07.726316   18432 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 23:39:07.726340   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:07.839991   18432 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0923 23:39:07.971761   18432 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0923 23:39:08.040127   18432 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0923 23:39:08.232025   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:08.253708   18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.133761281s)
	I0923 23:39:08.323991   18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.287327075s)
	I0923 23:39:08.733123   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:08.740375   18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.666695369s)
	W0923 23:39:08.740416   18432 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 23:39:08.740442   18432 retry.go:31] will retry after 219.083787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
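	The failure above is the usual CRD ordering race: the VolumeSnapshotClass is submitted in the same kubectl apply batch as the CRDs that define it, and the API server rejects the custom resource before the new types are being served. minikube's remedy, visible on the next line, is a timed retry with apply --force. Done by hand, the race can be avoided by establishing the CRDs first; a sketch reusing the exact paths and kubectl binary from this log:
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		# Block until the CRD is served before creating any VolumeSnapshotClass objects.
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl wait \
		  --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply \
		  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml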
	I0923 23:39:08.963513   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:09.225770   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:09.270117   18432 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:09.726328   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:09.928156   18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.994026972s)
	I0923 23:39:10.181443   18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.908362172s)
	I0923 23:39:10.181478   18432 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0923 23:39:10.185061   18432 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 23:39:10.187609   18432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 23:39:10.193237   18432 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 23:39:10.193259   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:10.224661   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:10.692702   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:10.724859   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:11.192316   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:11.224183   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:11.692272   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:11.724201   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:11.765789   18432 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:11.840227   18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.876658089s)
	I0923 23:39:12.192974   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:12.223967   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:12.265359   18432 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:12.265381   18432 pod_ready.go:82] duration metric: took 5.005186338s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:12.265393   18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:12.692862   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:12.723929   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:13.193570   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:13.224267   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:13.692069   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:13.791398   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:13.922243   18432 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 23:39:13.922399   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1371447207 /var/lib/minikube/google_application_credentials.json
	I0923 23:39:13.931942   18432 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 23:39:13.932039   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3333738616 /var/lib/minikube/google_cloud_project
	I0923 23:39:13.941782   18432 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0923 23:39:13.941824   18432 host.go:66] Checking if "minikube" exists ...
	I0923 23:39:13.942284   18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0923 23:39:13.942300   18432 api_server.go:166] Checking apiserver status ...
	I0923 23:39:13.942322   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:13.960256   18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
	I0923 23:39:13.969734   18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
	I0923 23:39:13.969779   18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
	I0923 23:39:13.978287   18432 api_server.go:204] freezer state: "THAWED"
	I0923 23:39:13.978309   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:14.106768   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
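	The pgrep/egrep/cat/healthz sequence above is minikube's apiserver probe on the none driver: find the kube-apiserver PID, resolve its freezer cgroup, confirm the cgroup is THAWED rather than frozen, then hit /healthz. Reproduced by hand it looks roughly like this (a sketch for the cgroup v1 layout this host uses; the pod and container hashes differ per run):
	
		PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
		# The third colon-separated field of the freezer line is the cgroup path.
		CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
		sudo cat /sys/fs/cgroup/freezer$CG/freezer.state    # expect "THAWED"
		curl -k https://10.138.0.48:8443/healthz            # expect "ok"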
	I0923 23:39:14.106839   18432 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 23:39:14.165180   18432 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:14.192183   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:14.224069   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:14.235499   18432 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 23:39:14.270435   18432 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:14.293575   18432 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 23:39:14.293654   18432 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 23:39:14.293821   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube132876625 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 23:39:14.303855   18432 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 23:39:14.303879   18432 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 23:39:14.303966   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3630155931 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 23:39:14.313033   18432 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:14.313058   18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 23:39:14.313150   18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3471904788 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:14.320727   18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:14.691900   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:14.708272   18432 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0923 23:39:14.709916   18432 out.go:177] * Verifying gcp-auth addon...
	I0923 23:39:14.712116   18432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 23:39:14.790727   18432 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 23:39:14.791288   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.191600   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:15.223733   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.692223   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:15.792263   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.191835   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:16.223471   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.270879   18432 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:16.270901   18432 pod_ready.go:82] duration metric: took 4.005499205s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.270914   18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.274783   18432 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:16.274803   18432 pod_ready.go:82] duration metric: took 3.882154ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.274813   18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9p26" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.278663   18432 pod_ready.go:93] pod "kube-proxy-k9p26" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:16.278682   18432 pod_ready.go:82] duration metric: took 3.86294ms for pod "kube-proxy-k9p26" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.278690   18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.282632   18432 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:16.282650   18432 pod_ready.go:82] duration metric: took 3.953566ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.282662   18432 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2wnr8" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.286261   18432 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-2wnr8" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:16.286276   18432 pod_ready.go:82] duration metric: took 3.607653ms for pod "nvidia-device-plugin-daemonset-2wnr8" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:16.286285   18432 pod_ready.go:39] duration metric: took 9.04119029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:16.286304   18432 api_server.go:52] waiting for apiserver process to appear ...
	I0923 23:39:16.286363   18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:16.304812   18432 api_server.go:72] duration metric: took 9.57797864s to wait for apiserver process to appear ...
	I0923 23:39:16.304838   18432 api_server.go:88] waiting for apiserver healthz status ...
	I0923 23:39:16.304859   18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0923 23:39:16.308958   18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0923 23:39:16.309815   18432 api_server.go:141] control plane version: v1.31.1
	I0923 23:39:16.309838   18432 api_server.go:131] duration metric: took 4.992795ms to wait for apiserver health ...
	I0923 23:39:16.309847   18432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 23:39:16.475191   18432 system_pods.go:59] 16 kube-system pods found
	I0923 23:39:16.475222   18432 system_pods.go:61] "coredns-7c65d6cfc9-48st5" [d679e0bc-9afa-45d5-8d47-aa413a0cf466] Running
	I0923 23:39:16.475233   18432 system_pods.go:61] "csi-hostpath-attacher-0" [c8ff58d3-6250-4451-9018-4b11f9fec10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:16.475242   18432 system_pods.go:61] "csi-hostpath-resizer-0" [2e6f2fcb-0de8-48b2-9727-cfe783496221] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:16.475255   18432 system_pods.go:61] "csi-hostpathplugin-h6hck" [94d367c6-48a7-48f2-8752-2e842cd7aba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:16.475262   18432 system_pods.go:61] "etcd-ubuntu-20-agent-2" [af379b5d-6311-48f9-9300-2c244eb7c693] Running
	I0923 23:39:16.475269   18432 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [8c1f258a-baa6-4a1f-9783-0fdfc0c40cb8] Running
	I0923 23:39:16.475274   18432 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [d567df93-d95c-4845-845b-799bf9f14489] Running
	I0923 23:39:16.475278   18432 system_pods.go:61] "kube-proxy-k9p26" [5e7867c1-dea7-4107-bd2a-995730bcc143] Running
	I0923 23:39:16.475283   18432 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [1d54e9fe-7ed0-45b2-bf4d-77eb49e6f2ce] Running
	I0923 23:39:16.475290   18432 system_pods.go:61] "metrics-server-84c5f94fbc-kfb6d" [ac1f00cd-4fff-4140-a995-8627eed03faf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:16.475296   18432 system_pods.go:61] "nvidia-device-plugin-daemonset-2wnr8" [1965e7c3-c30f-45a0-9555-6b2c4506d582] Running
	I0923 23:39:16.475305   18432 system_pods.go:61] "registry-66c9cd494c-jh4zk" [1fd26fe1-569a-41d8-bd27-41ea6d31c232] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:16.475316   18432 system_pods.go:61] "registry-proxy-twks8" [b1bc2a37-dafc-48f7-94a2-b80e57e12b9a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:16.475334   18432 system_pods.go:61] "snapshot-controller-56fcc65765-5hqd5" [d8a0657b-64c7-4669-a78d-336260cb986c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:16.475346   18432 system_pods.go:61] "snapshot-controller-56fcc65765-68n75" [8076a2dd-75ee-4755-be9c-da981c1711e1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:16.475353   18432 system_pods.go:61] "storage-provisioner" [9b61b607-d66c-485c-abe5-004021445c34] Running
	I0923 23:39:16.475363   18432 system_pods.go:74] duration metric: took 165.507567ms to wait for pod list to return data ...
	I0923 23:39:16.475372   18432 default_sa.go:34] waiting for default service account to be created ...
	I0923 23:39:16.669368   18432 default_sa.go:45] found service account: "default"
	I0923 23:39:16.669394   18432 default_sa.go:55] duration metric: took 194.015552ms for default service account to be created ...
	I0923 23:39:16.669405   18432 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 23:39:16.692336   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:16.724190   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.876135   18432 system_pods.go:86] 16 kube-system pods found
	I0923 23:39:16.876165   18432 system_pods.go:89] "coredns-7c65d6cfc9-48st5" [d679e0bc-9afa-45d5-8d47-aa413a0cf466] Running
	I0923 23:39:16.876178   18432 system_pods.go:89] "csi-hostpath-attacher-0" [c8ff58d3-6250-4451-9018-4b11f9fec10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:16.876200   18432 system_pods.go:89] "csi-hostpath-resizer-0" [2e6f2fcb-0de8-48b2-9727-cfe783496221] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:16.876224   18432 system_pods.go:89] "csi-hostpathplugin-h6hck" [94d367c6-48a7-48f2-8752-2e842cd7aba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:16.876233   18432 system_pods.go:89] "etcd-ubuntu-20-agent-2" [af379b5d-6311-48f9-9300-2c244eb7c693] Running
	I0923 23:39:16.876245   18432 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [8c1f258a-baa6-4a1f-9783-0fdfc0c40cb8] Running
	I0923 23:39:16.876252   18432 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [d567df93-d95c-4845-845b-799bf9f14489] Running
	I0923 23:39:16.876260   18432 system_pods.go:89] "kube-proxy-k9p26" [5e7867c1-dea7-4107-bd2a-995730bcc143] Running
	I0923 23:39:16.876266   18432 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [1d54e9fe-7ed0-45b2-bf4d-77eb49e6f2ce] Running
	I0923 23:39:16.876277   18432 system_pods.go:89] "metrics-server-84c5f94fbc-kfb6d" [ac1f00cd-4fff-4140-a995-8627eed03faf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:16.876282   18432 system_pods.go:89] "nvidia-device-plugin-daemonset-2wnr8" [1965e7c3-c30f-45a0-9555-6b2c4506d582] Running
	I0923 23:39:16.876294   18432 system_pods.go:89] "registry-66c9cd494c-jh4zk" [1fd26fe1-569a-41d8-bd27-41ea6d31c232] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 23:39:16.876307   18432 system_pods.go:89] "registry-proxy-twks8" [b1bc2a37-dafc-48f7-94a2-b80e57e12b9a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:16.876319   18432 system_pods.go:89] "snapshot-controller-56fcc65765-5hqd5" [d8a0657b-64c7-4669-a78d-336260cb986c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:16.876329   18432 system_pods.go:89] "snapshot-controller-56fcc65765-68n75" [8076a2dd-75ee-4755-be9c-da981c1711e1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:16.876336   18432 system_pods.go:89] "storage-provisioner" [9b61b607-d66c-485c-abe5-004021445c34] Running
	I0923 23:39:16.876344   18432 system_pods.go:126] duration metric: took 206.933146ms to wait for k8s-apps to be running ...
	I0923 23:39:16.876358   18432 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 23:39:16.876408   18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:39:16.892504   18432 system_svc.go:56] duration metric: took 16.137748ms WaitForService to wait for kubelet
	I0923 23:39:16.892533   18432 kubeadm.go:582] duration metric: took 10.16570392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:39:16.892558   18432 node_conditions.go:102] verifying NodePressure condition ...
	I0923 23:39:17.070022   18432 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 23:39:17.070054   18432 node_conditions.go:123] node cpu capacity is 8
	I0923 23:39:17.070067   18432 node_conditions.go:105] duration metric: took 177.503592ms to run NodePressure ...
	I0923 23:39:17.070080   18432 start.go:241] waiting for startup goroutines ...
	I0923 23:39:17.216332   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:17.316382   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.692120   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:17.792614   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.192784   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.223025   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.691562   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.723416   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.194346   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.294030   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.691069   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.723771   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.192176   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.223909   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.692794   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.723470   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.193180   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.224390   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.692832   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.792568   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.192434   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.223509   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.692853   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.723726   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.192626   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.224712   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.692906   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.723678   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.192473   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.223258   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.692103   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.723978   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.192336   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.292291   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.691571   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.723694   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.192462   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.246005   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.691729   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.723514   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.191684   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.223122   18432 kapi.go:107] duration metric: took 19.502629386s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 23:39:27.692737   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.192690   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.692075   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.193410   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.691597   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.192393   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.691681   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.191296   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.691931   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.194863   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.691498   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.192361   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.692299   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.192000   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.692419   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.191641   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.692918   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.192362   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.691476   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.191757   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.692404   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.192158   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.693252   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.193949   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.692581   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.192828   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.693165   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.191766   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.692471   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.215665   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.692365   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.191464   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.692155   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.191831   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.692392   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.192481   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.692916   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.191724   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.692239   18432 kapi.go:107] duration metric: took 36.504628597s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
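	The kapi.go lines above are a roughly half-second poll loop that exits once every pod behind the label selector leaves Pending; here the registry pods needed 19.5s and the csi-hostpath-driver pods 36.5s. Outside the test harness, much the same wait can be expressed with kubectl directly (a sketch; condition=Ready is the closest built-in to kapi's Running check):
	
		kubectl --context minikube -n kube-system wait pod \
		  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
		  --for=condition=Ready --timeout=6m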
	I0923 23:39:56.216092   18432 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 23:39:56.216113   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:56.715405   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.215731   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:57.715701   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.215390   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:58.715137   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.215180   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:39:59.715278   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.215042   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:00.715066   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.215017   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:01.715811   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.215953   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:02.715771   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:03.215704   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:03.715507   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.215734   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:04.715585   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.215480   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:05.716019   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.215355   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:06.715100   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.215530   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:07.715695   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.215477   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:08.715664   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.215462   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:09.715496   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.215207   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:10.715529   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.215670   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:11.715827   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.215453   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:12.715372   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.215030   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:13.715660   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.215673   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:14.715349   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.215117   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:15.715414   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.215132   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:16.715481   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.215570   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:17.715439   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.215767   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:18.716284   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.215632   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:19.735946   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.215341   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:20.715598   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.215481   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:21.715536   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.215523   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:22.715465   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.215134   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:23.716198   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.215294   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:24.714766   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.215416   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:25.715836   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.215861   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:26.715470   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.215573   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:27.715734   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.215953   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:28.716140   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.215581   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:29.715490   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.215233   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:30.715388   18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:31.215442   18432 kapi.go:107] duration metric: took 1m16.503323021s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 23:40:31.217054   18432 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0923 23:40:31.218297   18432 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 23:40:31.219525   18432 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 23:40:31.220904   18432 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, yakd, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0923 23:40:31.222355   18432 addons.go:510] duration metric: took 1m24.502265042s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass yakd metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0923 23:40:31.222394   18432 start.go:246] waiting for cluster config update ...
	I0923 23:40:31.222413   18432 start.go:255] writing updated cluster config ...
	I0923 23:40:31.222679   18432 exec_runner.go:51] Run: rm -f paused
	I0923 23:40:31.267257   18432 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 23:40:31.268917   18432 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
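
	Note: the gcp-auth messages above describe the addon's opt-out mechanism. A minimal sketch of using it, assuming a hypothetical pod name no-creds-demo; the log names only the label key, and the "true" value follows the convention in the minikube docs:
	
	    # run a pod the gcp-auth webhook should skip, so no GCP
	    # credential secret gets mounted into it
	    kubectl --context minikube run no-creds-demo \
	      --image=busybox --restart=Never \
	      --labels=gcp-auth-skip-secret=true \
	      -- sleep 300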
	
	
	==> Docker <==
	-- Logs begin at Thu 2024-08-15 05:18:14 UTC, end at Mon 2024-09-23 23:50:22 UTC. --
	Sep 23 23:42:43 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:42:43.145372192Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c9a5deaf674a17a9 traceID=2aba2725a5071a96cb2fbdcd3c2db75c
	Sep 23 23:44:08 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:44:08.084064973Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5868ea084aa34a52 traceID=acdec5092f371a52f1d12e6b63f998f1
	Sep 23 23:44:08 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:44:08.086229392Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5868ea084aa34a52 traceID=acdec5092f371a52f1d12e6b63f998f1
	Sep 23 23:45:18 ubuntu-20-agent-2 cri-dockerd[18979]: time="2024-09-23T23:45:18Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.414401073Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.414457526Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.416206399Z" level=error msg="Error running exec edf08d6a51874c2cc307dba7aafd45ecd1649de40226fcd4639e8d561491d403 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=8357d8297aafdea0 traceID=e2228613a7e517f91b777742dad825b3
	Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.479647904Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.479647939Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.481392045Z" level=error msg="Error running exec 5e89eaea9880caf4c543e16f8d9b8bd6666dafe9d337451b531ddb99b0bd5fd6 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=132479b1a7a19163 traceID=7bde93eb3e1cd634caf0b80fe9b11aa4
	Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.615285814Z" level=info msg="ignoring event" container=83dfa51e56c75dfb4b0faefa8ea4ceae7d1479e97caeb54566a9ddce5e26bf57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:46:57 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:46:57.099934441Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d5482e3990f09be6 traceID=46ea8e0bad4f3739a365ffcb07300c3f
	Sep 23 23:46:57 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:46:57.102318894Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d5482e3990f09be6 traceID=46ea8e0bad4f3739a365ffcb07300c3f
	Sep 23 23:49:22 ubuntu-20-agent-2 cri-dockerd[18979]: time="2024-09-23T23:49:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27ef64abfb24c0f9177fca73fcdc9d1332c537d5cd6a37974dff83d28718954/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 23 23:49:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:22.765133118Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b6bbcec57dca1f3 traceID=742e77a588309ed738e63d2c982cdb67
	Sep 23 23:49:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:22.767151099Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b6bbcec57dca1f3 traceID=742e77a588309ed738e63d2c982cdb67
	Sep 23 23:49:34 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:34.089611907Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2d70a1dc9c2856d1 traceID=53ebcb273cf7d25966b7f8e7b77225fd
	Sep 23 23:49:34 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:34.091487335Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2d70a1dc9c2856d1 traceID=53ebcb273cf7d25966b7f8e7b77225fd
	Sep 23 23:50:00 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:00.095039419Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=79911499c8fe6a0b traceID=4c0206d04cecb99a7480c331c5b6cc0c
	Sep 23 23:50:00 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:00.097183329Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=79911499c8fe6a0b traceID=4c0206d04cecb99a7480c331c5b6cc0c
	Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.180905355Z" level=info msg="ignoring event" container=a27ef64abfb24c0f9177fca73fcdc9d1332c537d5cd6a37974dff83d28718954 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.428743748Z" level=info msg="ignoring event" container=6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.487508948Z" level=info msg="ignoring event" container=5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.563447368Z" level=info msg="ignoring event" container=dce94edaac03bb3e640dafe4ce1ba4c623b72547b4b00e000ee2e5a7a011718a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.643992334Z" level=info msg="ignoring event" container=60418f53397fe1557c1cf552ba01c487ad00bab55167dadb9d28aecdcee36ee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	83dfa51e56c75       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   3e8853cc705e2       gadget-8z8sg
	ba2c3bb63a208       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   4803c0b83d910       gcp-auth-89d5ffd79-4blxd
	f0fb36c0c6ad4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   5597a5300ac16       csi-hostpathplugin-h6hck
	6ed9ccd8da247       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   5597a5300ac16       csi-hostpathplugin-h6hck
	6d7f7d70cac5a       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   5597a5300ac16       csi-hostpathplugin-h6hck
	088fd20c2fd81       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   5597a5300ac16       csi-hostpathplugin-h6hck
	2742add6570d1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   5597a5300ac16       csi-hostpathplugin-h6hck
	4e0081f0d620d       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   f054130d78248       csi-hostpath-attacher-0
	7dcaea766ee6d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   7b7566773ebcf       csi-hostpath-resizer-0
	27bf14a8312f3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   5597a5300ac16       csi-hostpathplugin-h6hck
	a61ab9d35c13e       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   2d8bdd3237cac       snapshot-controller-56fcc65765-5hqd5
	4e35ab28cefbb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   d3ff3d7efad58       snapshot-controller-56fcc65765-68n75
	583b4557dd1dd       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   8b9bffc240e92       local-path-provisioner-86d989889c-sc47k
	a791577cca5a2       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   f063f26a4f75b       yakd-dashboard-67d98fc6b-w54xd
	9d38b2469ff46       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   f26ed69a5918e       metrics-server-84c5f94fbc-kfb6d
	a943a4924081c       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   413e871ef3e48       cloud-spanner-emulator-5b584cc74-th77b
	b503b99111eed       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   e59695eca1734       nvidia-device-plugin-daemonset-2wnr8
	73cd2003c8e68       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   402bc2953059c       coredns-7c65d6cfc9-48st5
	76b81b284c2d7       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   bacf4d8ae21fe       storage-provisioner
	dd68ad72ce52b       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   c9c13e4edaac6       kube-proxy-k9p26
	b90fe5d50acd6       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   b92b01e542fa3       etcd-ubuntu-20-agent-2
	7e500a67c4d3d       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   c099feaad67e2       kube-controller-manager-ubuntu-20-agent-2
	b014ea59d4af9       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   b82d91d215f9a       kube-apiserver-ubuntu-20-agent-2
	64eba13525584       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   80854062edf18       kube-scheduler-ubuntu-20-agent-2
	
	
	==> coredns [73cd2003c8e6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43477 - 59171 "HINFO IN 4658389154335151315.896572764661331685. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.203543242s
	[INFO] 10.244.0.24:48428 - 48264 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000296247s
	[INFO] 10.244.0.24:41913 - 5315 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000370033s
	[INFO] 10.244.0.24:50818 - 8056 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100574s
	[INFO] 10.244.0.24:51596 - 47262 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137289s
	[INFO] 10.244.0.24:42468 - 3678 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130623s
	[INFO] 10.244.0.24:52371 - 37731 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149165s
	[INFO] 10.244.0.24:36120 - 38242 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003491113s
	[INFO] 10.244.0.24:45515 - 3155 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004191745s
	[INFO] 10.244.0.24:47392 - 27648 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.002971199s
	[INFO] 10.244.0.24:39136 - 19472 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003506534s
	[INFO] 10.244.0.24:38852 - 12519 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002973071s
	[INFO] 10.244.0.24:35810 - 41647 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003102396s
	[INFO] 10.244.0.24:50527 - 178 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002117937s
	[INFO] 10.244.0.24:34441 - 42306 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002729227s
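
	Note: the NXDOMAIN run above is CoreDNS walking the ndots:5 search list from the resolv.conf rewrite shown in the Docker section, ending in NOERROR answers for the bare name. An illustrative check; the trailing dot marks the name as fully qualified, which skips the search-domain walk:
	
	    # inside any pod with nslookup available
	    nslookup storage.googleapis.com.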
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T23_39_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 23:38:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 23:50:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 23:46:11 +0000   Mon, 23 Sep 2024 23:38:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 23:46:11 +0000   Mon, 23 Sep 2024 23:38:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 23:46:11 +0000   Mon, 23 Sep 2024 23:38:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 23:46:11 +0000   Mon, 23 Sep 2024 23:38:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    38b63acc-66f8-4c7e-8578-c838561f2860
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-5b584cc74-th77b       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-8z8sg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-4blxd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-48st5                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-h6hck                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k9p26                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-kfb6d              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-2wnr8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-5hqd5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-68n75         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-sc47k      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-w54xd               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 23 f2 19 24 ac 08 06
	[  +1.050755] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 3d 7e 5a ac 7c 08 06
	[  +0.013452] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 6a cb ea 95 b1 08 06
	[  +2.558803] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 ad c2 43 0b d5 08 06
	[  +1.672150] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae e6 39 2a 23 d6 08 06
	[  +1.880477] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 fc 2b 31 d0 80 08 06
	[  +4.877626] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da c3 da f9 9c 27 08 06
	[  +0.139606] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 fd 60 1e e2 c6 08 06
	[  +0.440564] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a a7 bf ff 27 58 08 06
	[Sep23 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 e4 a9 71 3c ed 08 06
	[  +0.097865] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 02 f9 bf 0e c3 08 06
	[ +10.876730] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 1f fc 93 70 85 08 06
	[  +0.000480] IPv4: martian source 10.244.0.24 from 10.244.0.6, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 36 d5 1e 0d 11 a7 08 06
	
	
	==> etcd [b90fe5d50acd] <==
	{"level":"info","ts":"2024-09-23T23:38:58.583629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T23:38:58.583656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-23T23:38:58.583669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T23:38:58.583680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-23T23:38:58.583689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-23T23:38:58.583714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-23T23:38:58.584779Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T23:38:58.584776Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:58.584814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:38:58.584845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:38:58.584982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T23:38:58.585012Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T23:38:58.585539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:58.585892Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:58.585961Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:58.587279Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:38:58.587395Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:38:58.588492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-23T23:38:58.588782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-23T23:39:14.105960Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.013975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:39:14.106034Z","caller":"traceutil/trace.go:171","msg":"trace[995397433] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:815; }","duration":"124.13314ms","start":"2024-09-23T23:39:13.981888Z","end":"2024-09-23T23:39:14.106021Z","steps":["trace[995397433] 'range keys from in-memory index tree'  (duration: 123.943764ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:39:14.481304Z","caller":"traceutil/trace.go:171","msg":"trace[1093074071] transaction","detail":"{read_only:false; response_revision:816; number_of_response:1; }","duration":"109.581542ms","start":"2024-09-23T23:39:14.371708Z","end":"2024-09-23T23:39:14.481290Z","steps":["trace[1093074071] 'process raft request'  (duration: 109.483898ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:58.602336Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1686}
	{"level":"info","ts":"2024-09-23T23:48:58.626172Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1686,"took":"23.369626ms","hash":3865861227,"current-db-size-bytes":8175616,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4337664,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-23T23:48:58.626214Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3865861227,"revision":1686,"compact-revision":-1}
	
	
	==> gcp-auth [ba2c3bb63a20] <==
	2024/09/23 23:40:30 GCP Auth Webhook started!
	2024/09/23 23:40:47 Ready to marshal response ...
	2024/09/23 23:40:47 Ready to write response ...
	2024/09/23 23:40:48 Ready to marshal response ...
	2024/09/23 23:40:48 Ready to write response ...
	2024/09/23 23:41:09 Ready to marshal response ...
	2024/09/23 23:41:09 Ready to write response ...
	2024/09/23 23:41:09 Ready to marshal response ...
	2024/09/23 23:41:09 Ready to write response ...
	2024/09/23 23:41:10 Ready to marshal response ...
	2024/09/23 23:41:10 Ready to write response ...
	2024/09/23 23:49:22 Ready to marshal response ...
	2024/09/23 23:49:22 Ready to write response ...
	
	
	==> kernel <==
	 23:50:23 up 32 min,  0 users,  load average: 0.31, 0.37, 0.33
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [b014ea59d4af] <==
	W0923 23:39:48.845664       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.177.38:443: connect: connection refused
	W0923 23:39:55.715515       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.61.13:443: connect: connection refused
	E0923 23:39:55.715548       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.61.13:443: connect: connection refused" logger="UnhandledError"
	W0923 23:40:17.725452       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.61.13:443: connect: connection refused
	E0923 23:40:17.725493       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.61.13:443: connect: connection refused" logger="UnhandledError"
	W0923 23:40:17.738595       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.61.13:443: connect: connection refused
	E0923 23:40:17.738636       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.61.13:443: connect: connection refused" logger="UnhandledError"
	I0923 23:40:47.526712       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0923 23:40:47.542650       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0923 23:40:59.911435       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0923 23:40:59.925747       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0923 23:41:00.028526       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 23:41:00.055861       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 23:41:00.055913       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0923 23:41:00.064147       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 23:41:00.192022       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 23:41:00.205463       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 23:41:00.256061       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 23:41:00.969621       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0923 23:41:01.070881       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 23:41:01.081631       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 23:41:01.097418       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 23:41:01.256935       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 23:41:01.302930       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 23:41:01.450389       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [7e500a67c4d3] <==
	W0923 23:48:59.530622       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:48:59.530662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:10.809457       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:10.809496       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:16.655560       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:16.655604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:18.335185       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:18.335224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:25.503907       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:25.503972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:34.089319       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:34.089362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:42.072793       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:42.072830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:43.683958       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:43.683996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:49:49.657904       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:49:49.657947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:06.095992       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:06.096033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:07.830773       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:07.830812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:17.517197       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:17.517237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:50:22.394710       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.181µs"
	
	
	==> kube-proxy [dd68ad72ce52] <==
	I0923 23:39:08.551651       1 server_linux.go:66] "Using iptables proxy"
	I0923 23:39:08.716242       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0923 23:39:08.716314       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:39:08.812648       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 23:39:08.812729       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:39:08.821062       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:39:08.821382       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:39:08.821403       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:39:08.827527       1 config.go:199] "Starting service config controller"
	I0923 23:39:08.827543       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:39:08.827573       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:39:08.827579       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:39:08.828068       1 config.go:328] "Starting node config controller"
	I0923 23:39:08.828077       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:39:08.930364       1 shared_informer.go:320] Caches are synced for node config
	I0923 23:39:08.930404       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:39:08.930456       1 shared_informer.go:320] Caches are synced for endpoint slice config
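
	Note: the startup messages at 23:39:08 name the two knobs governing node-port exposure. Purely illustrative, using the values the log itself suggests:
	
	    # restrict NodePort listeners to the primary node IP and stop
	    # accepting node-port traffic on loopback
	    kube-proxy --nodeport-addresses primary --iptables-localhost-nodeports=false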
	
	
	==> kube-scheduler [64eba1352558] <==
	E0923 23:38:59.735444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:59.735500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 23:38:59.735534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0923 23:38:59.734654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:59.735630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 23:38:59.735661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:59.735734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0923 23:38:59.735802       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:38:59.735829       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0923 23:38:59.735764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:59.735961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 23:38:59.735985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:59.736166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 23:38:59.736188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:59.736415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0923 23:38:59.736426       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 23:38:59.736439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:59.736450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 23:38:59.736449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0923 23:38:59.736468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:00.600063       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 23:39:00.600119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 23:39:00.607286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 23:39:00.607319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 23:39:01.333868       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Thu 2024-08-15 05:18:14 UTC, end at Mon 2024-09-23 23:50:23 UTC. --
	Sep 23 23:50:04 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:04.952768   19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="375e9c91-f2dd-4d52-a086-6895e79b1d1e"
	Sep 23 23:50:13 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:13.951497   19859 scope.go:117] "RemoveContainer" containerID="83dfa51e56c75dfb4b0faefa8ea4ceae7d1479e97caeb54566a9ddce5e26bf57"
	Sep 23 23:50:13 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:13.951709   19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-8z8sg_gadget(d327e609-6b19-4431-b38f-9029fefa34a3)\"" pod="gadget/gadget-8z8sg" podUID="d327e609-6b19-4431-b38f-9029fefa34a3"
	Sep 23 23:50:13 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:13.953344   19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="977ee237-4755-42a9-bdfb-c1d58f1158cf"
	Sep 23 23:50:17 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:17.953270   19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="375e9c91-f2dd-4d52-a086-6895e79b1d1e"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.381606   19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/977ee237-4755-42a9-bdfb-c1d58f1158cf-gcp-creds\") pod \"977ee237-4755-42a9-bdfb-c1d58f1158cf\" (UID: \"977ee237-4755-42a9-bdfb-c1d58f1158cf\") "
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.381661   19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkdpm\" (UniqueName: \"kubernetes.io/projected/977ee237-4755-42a9-bdfb-c1d58f1158cf-kube-api-access-hkdpm\") pod \"977ee237-4755-42a9-bdfb-c1d58f1158cf\" (UID: \"977ee237-4755-42a9-bdfb-c1d58f1158cf\") "
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.381717   19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977ee237-4755-42a9-bdfb-c1d58f1158cf-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "977ee237-4755-42a9-bdfb-c1d58f1158cf" (UID: "977ee237-4755-42a9-bdfb-c1d58f1158cf"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.384128   19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977ee237-4755-42a9-bdfb-c1d58f1158cf-kube-api-access-hkdpm" (OuterVolumeSpecName: "kube-api-access-hkdpm") pod "977ee237-4755-42a9-bdfb-c1d58f1158cf" (UID: "977ee237-4755-42a9-bdfb-c1d58f1158cf"). InnerVolumeSpecName "kube-api-access-hkdpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.481964   19859 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/977ee237-4755-42a9-bdfb-c1d58f1158cf-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.482009   19859 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hkdpm\" (UniqueName: \"kubernetes.io/projected/977ee237-4755-42a9-bdfb-c1d58f1158cf-kube-api-access-hkdpm\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.784279   19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md6f6\" (UniqueName: \"kubernetes.io/projected/1fd26fe1-569a-41d8-bd27-41ea6d31c232-kube-api-access-md6f6\") pod \"1fd26fe1-569a-41d8-bd27-41ea6d31c232\" (UID: \"1fd26fe1-569a-41d8-bd27-41ea6d31c232\") "
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.786480   19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd26fe1-569a-41d8-bd27-41ea6d31c232-kube-api-access-md6f6" (OuterVolumeSpecName: "kube-api-access-md6f6") pod "1fd26fe1-569a-41d8-bd27-41ea6d31c232" (UID: "1fd26fe1-569a-41d8-bd27-41ea6d31c232"). InnerVolumeSpecName "kube-api-access-md6f6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.840341   19859 scope.go:117] "RemoveContainer" containerID="5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.857549   19859 scope.go:117] "RemoveContainer" containerID="5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:22.858515   19859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760" containerID="5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.858567   19859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"} err="failed to get container status \"5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760\": rpc error: code = Unknown desc = Error response from daemon: No such container: 5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.858597   19859 scope.go:117] "RemoveContainer" containerID="6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.875825   19859 scope.go:117] "RemoveContainer" containerID="6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:22.876654   19859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89" containerID="6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.876698   19859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"} err="failed to get container status \"6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.885040   19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lkw\" (UniqueName: \"kubernetes.io/projected/b1bc2a37-dafc-48f7-94a2-b80e57e12b9a-kube-api-access-f8lkw\") pod \"b1bc2a37-dafc-48f7-94a2-b80e57e12b9a\" (UID: \"b1bc2a37-dafc-48f7-94a2-b80e57e12b9a\") "
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.885118   19859 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-md6f6\" (UniqueName: \"kubernetes.io/projected/1fd26fe1-569a-41d8-bd27-41ea6d31c232-kube-api-access-md6f6\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.886921   19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1bc2a37-dafc-48f7-94a2-b80e57e12b9a-kube-api-access-f8lkw" (OuterVolumeSpecName: "kube-api-access-f8lkw") pod "b1bc2a37-dafc-48f7-94a2-b80e57e12b9a" (UID: "b1bc2a37-dafc-48f7-94a2-b80e57e12b9a"). InnerVolumeSpecName "kube-api-access-f8lkw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.985947   19859 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f8lkw\" (UniqueName: \"kubernetes.io/projected/b1bc2a37-dafc-48f7-94a2-b80e57e12b9a-kube-api-access-f8lkw\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	
	
	==> storage-provisioner [76b81b284c2d] <==
	I0923 23:39:09.020188       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 23:39:09.034064       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 23:39:09.034109       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 23:39:09.042235       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 23:39:09.042398       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_55db1eb7-47c2-43d6-b4c3-9de0248b2260!
	I0923 23:39:09.046263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"08d74b23-1e4a-4198-9169-fe387f5c40cf", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_55db1eb7-47c2-43d6-b4c3-9de0248b2260 became leader
	I0923 23:39:09.142890       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_55db1eb7-47c2-43d6-b4c3-9de0248b2260!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Mon, 23 Sep 2024 23:41:09 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hq8sm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hq8sm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m41s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m40s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m40s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.75s)
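Likely root cause: the wget check never reached the registry because its client pod never started. The kubelet log above shows registry-test stuck in ImagePullBackOff for gcr.io/k8s-minikube/busybox (23:50:13), and the busybox pod's events show the underlying failure, "unauthorized: authentication failed" when pulling from gcr.io. With no image, the in-cluster "wget --spider" never ran and "kubectl run" timed out after 1m0s. Hypothetical triage commands (illustrative only, not part of this run):

	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc   # reproduce the gcr.io auth error from the node
	kubectl --context minikube get events -n default --field-selector reason=Failed   # list the image-pull failures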


Test pass (110/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 2.15
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.1
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 0.95
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 74.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 101.04
29 TestAddons/serial/Volcano 38.44
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.43
36 TestAddons/parallel/MetricsServer 5.35
38 TestAddons/parallel/CSI 46.25
39 TestAddons/parallel/Headlamp 14.83
40 TestAddons/parallel/CloudSpanner 5.25
42 TestAddons/parallel/NvidiaDevicePlugin 5.22
43 TestAddons/parallel/Yakd 10.39
44 TestAddons/StoppedEnableDisable 10.66
46 TestCertExpiration 227.11
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 28.83
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 24.37
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.06
64 TestFunctional/serial/MinikubeKubectlCmd 0.1
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 36.32
67 TestFunctional/serial/ComponentHealth 0.06
68 TestFunctional/serial/LogsCmd 0.77
69 TestFunctional/serial/LogsFileCmd 0.81
70 TestFunctional/serial/InvalidService 4.54
72 TestFunctional/parallel/ConfigCmd 0.25
73 TestFunctional/parallel/DashboardCmd 7.66
74 TestFunctional/parallel/DryRun 0.16
75 TestFunctional/parallel/InternationalLanguage 0.08
76 TestFunctional/parallel/StatusCmd 0.4
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
80 TestFunctional/parallel/ProfileCmd/profile_list 0.2
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.19
83 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
84 TestFunctional/parallel/ServiceCmd/List 0.33
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
87 TestFunctional/parallel/ServiceCmd/Format 0.15
88 TestFunctional/parallel/ServiceCmd/URL 0.14
89 TestFunctional/parallel/ServiceCmdConnect 6.29
90 TestFunctional/parallel/AddonsCmd 0.11
91 TestFunctional/parallel/PersistentVolumeClaim 23.84
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.25
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.17
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
106 TestFunctional/parallel/MySQL 21.78
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.3
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.68
115 TestFunctional/parallel/NodeLabels 0.06
119 TestFunctional/parallel/Version/short 0.04
120 TestFunctional/parallel/Version/components 0.38
121 TestFunctional/parallel/License 0.26
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.02
124 TestFunctional/delete_minikube_cached_images 0.02
129 TestImageBuild/serial/Setup 14.31
130 TestImageBuild/serial/NormalBuild 1.54
131 TestImageBuild/serial/BuildWithBuildArg 0.8
132 TestImageBuild/serial/BuildWithDockerIgnore 0.59
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.57
137 TestJSONOutput/start/Command 26.5
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.5
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.43
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 5.31
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.19
165 TestMainNoArgs 0.04
166 TestMinikubeProfile 33.24
174 TestPause/serial/Start 24.02
175 TestPause/serial/SecondStartNoReconfiguration 28.55
176 TestPause/serial/Pause 0.49
177 TestPause/serial/VerifyStatus 0.13
178 TestPause/serial/Unpause 0.39
179 TestPause/serial/PauseAgain 0.53
180 TestPause/serial/DeletePaused 1.76
181 TestPause/serial/VerifyDeletedResources 0.06
195 TestRunningBinaryUpgrade 67.68
197 TestStoppedBinaryUpgrade/Setup 0.46
198 TestStoppedBinaryUpgrade/Upgrade 49.23
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
200 TestKubernetesUpgrade 307.25

TestDownloadOnly/v1.20.0/json-events (2.15s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (2.144993921s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (2.15s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (53.46116ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:37:31
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:37:31.385255   14365 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:37:31.385354   14365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:31.385358   14365 out.go:358] Setting ErrFile to fd 2...
	I0923 23:37:31.385362   14365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:31.385538   14365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7453/.minikube/bin
	W0923 23:37:31.385674   14365 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19696-7453/.minikube/config/config.json: open /home/jenkins/minikube-integration/19696-7453/.minikube/config/config.json: no such file or directory
	I0923 23:37:31.386205   14365 out.go:352] Setting JSON to true
	I0923 23:37:31.387118   14365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1200,"bootTime":1727133451,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:37:31.387213   14365 start.go:139] virtualization: kvm guest
	I0923 23:37:31.389614   14365 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 23:37:31.389715   14365 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7453/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 23:37:31.389752   14365 notify.go:220] Checking for updates...
	I0923 23:37:31.391098   14365 out.go:169] MINIKUBE_LOCATION=19696
	I0923 23:37:31.392543   14365 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:37:31.393960   14365 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	I0923 23:37:31.395324   14365 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	I0923 23:37:31.396679   14365 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.10s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (0.95s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.95s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.261824ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:37:33
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:37:33.801508   14519 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:37:33.801755   14519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:33.801765   14519 out.go:358] Setting ErrFile to fd 2...
	I0923 23:37:33.801771   14519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:33.801946   14519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7453/.minikube/bin
	I0923 23:37:33.802457   14519 out.go:352] Setting JSON to true
	I0923 23:37:33.803253   14519 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1203,"bootTime":1727133451,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:37:33.803337   14519 start.go:139] virtualization: kvm guest
	I0923 23:37:33.805228   14519 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 23:37:33.805305   14519 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7453/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 23:37:33.805348   14519 notify.go:220] Checking for updates...
	I0923 23:37:33.806605   14519 out.go:169] MINIKUBE_LOCATION=19696
	I0923 23:37:33.808025   14519 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:37:33.809292   14519 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	I0923 23:37:33.810599   14519 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	I0923 23:37:33.811894   14519 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
I0923 23:37:35.224057   14353 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:36161 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (74.38s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m12.864418547s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.512747008s)
--- PASS: TestOffline (74.38s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (45.671176ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (44.941138ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (101.04s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m41.042050803s)
--- PASS: TestAddons/Setup (101.04s)

TestAddons/serial/Volcano (38.44s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 8.910879ms
addons_test.go:843: volcano-admission stabilized in 8.945066ms
addons_test.go:835: volcano-scheduler stabilized in 9.032177ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-j4mpl" [f39ca8a7-22cd-4ead-8827-c51a050d5652] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003257018s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-csbwx" [c2aac002-950c-4403-b09e-61c3072fa379] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00293665s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-zv2x5" [132dadb4-d5ec-4542-9c71-a1f9c9464810] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002798612s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7f711b16-b335-4cb6-89d0-ae80e6a690e2] Pending
helpers_test.go:344: "test-job-nginx-0" [7f711b16-b335-4cb6-89d0-ae80e6a690e2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7f711b16-b335-4cb6-89d0-ae80e6a690e2] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004562016s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.113882879s)
--- PASS: TestAddons/serial/Volcano (38.44s)
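For reference, the vcjob created from testdata/vcjob.yaml above uses Volcano's batch.volcano.sh/v1alpha1 Job API. A minimal sketch of such a job, consistent with the pod name test-job-nginx-0 seen above (hypothetical; the actual testdata file may differ):

	kubectl --context minikube create -f - <<-'EOF'
	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano
	  minAvailable: 1
	  tasks:
	    - name: nginx        # yields pods named test-job-nginx-<index>
	      replicas: 1
	      template:
	        spec:
	          restartPolicy: Never
	          containers:
	            - name: nginx
	              image: nginx
	EOF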

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.43s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8z8sg" [d327e609-6b19-4431-b38f-9029fefa34a3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003352426s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.42502792s)
--- PASS: TestAddons/parallel/InspektorGadget (10.43s)

TestAddons/parallel/MetricsServer (5.35s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.791226ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-kfb6d" [ac1f00cd-4fff-4140-a995-8627eed03faf] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003428586s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.35s)

TestAddons/parallel/CSI (46.25s)

=== RUN   TestAddons/parallel/CSI
I0923 23:50:39.460784   14353 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 23:50:39.464633   14353 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 23:50:39.464656   14353 kapi.go:107] duration metric: took 3.888899ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.909635ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [abd66e8f-0adc-408c-b93b-c3f2f8c540c5] Pending
helpers_test.go:344: "task-pv-pod" [abd66e8f-0adc-408c-b93b-c3f2f8c540c5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [abd66e8f-0adc-408c-b93b-c3f2f8c540c5] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003757302s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c25de1ca-372c-4023-84b0-f56b32feab34] Pending
helpers_test.go:344: "task-pv-pod-restore" [c25de1ca-372c-4023-84b0-f56b32feab34] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c25de1ca-372c-4023-84b0-f56b32feab34] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003521566s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.258801996s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.25s)
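The sequence above exercises the full CSI snapshot path: claim (hpvc), pod (task-pv-pod), VolumeSnapshot (new-snapshot-demo), restored claim (hpvc-restore), restored pod (task-pv-pod-restore). A restore claim of this kind points its dataSource at the snapshot; a minimal sketch (hypothetical; the actual testdata/csi-hostpath-driver/pvc-restore.yaml and the storage class name may differ):

	kubectl --context minikube create -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc   # assumed class installed by the csi-hostpath-driver addon
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:                         # restore: populate the new volume from the snapshot
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF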

TestAddons/parallel/Headlamp (14.83s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8k9j5" [0f96f558-e129-4c59-8db8-55c602c5bbc8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8k9j5" [0f96f558-e129-4c59-8db8-55c602c5bbc8] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003642188s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.385099273s)
--- PASS: TestAddons/parallel/Headlamp (14.83s)

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-th77b" [82405cbc-e9f7-4223-8967-579dbfbfe9a6] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003309616s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/NvidiaDevicePlugin (5.22s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2wnr8" [1965e7c3-c30f-45a0-9555-6b2c4506d582] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003932304s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.22s)

TestAddons/parallel/Yakd (10.39s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-w54xd" [97f524e2-ff03-4e0f-93e9-7b1eea6ddd82] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003651555s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.387370939s)
--- PASS: TestAddons/parallel/Yakd (10.39s)

TestAddons/StoppedEnableDisable (10.66s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.380727817s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.66s)

TestCertExpiration (227.11s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.990494797s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.566790219s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.542525232s)
--- PASS: TestCertExpiration (227.11s)
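Note the timing above: the two starts take roughly 14s and 32s, and the profile delete about 1.5s, so most of the 227.11s is presumably a deliberate wait for the 3m certificates (--cert-expiration=3m) to lapse. The second start, with --cert-expiration=8760h (= 365 x 24h, i.e. one year), then has to detect the expired certificates and regenerate them rather than fail.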

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19696-7453/.minikube/files/etc/test/nested/copy/14353/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (28.83s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (28.828734292s)
--- PASS: TestFunctional/serial/StartWithProxy (28.83s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (24.37s)

=== RUN   TestFunctional/serial/SoftStart
I0923 23:56:29.218516   14353 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (24.365568021s)
functional_test.go:663: soft start took 24.366262748s for "minikube" cluster.
I0923 23:56:53.584427   14353 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (24.37s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.32s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.31750241s)
functional_test.go:761: restart took 36.31760694s for "minikube" cluster.
I0923 23:57:30.205117   14353 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (36.32s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.77s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.77s)

TestFunctional/serial/LogsFileCmd (0.81s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1956997850/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.81s)

TestFunctional/serial/InvalidService (4.54s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (157.252438ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:30726 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.219444424s)
--- PASS: TestFunctional/serial/InvalidService (4.54s)

TestFunctional/parallel/ConfigCmd (0.25s)
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (39.669415ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.693826ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
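Note: the ConfigCmd block above exercises the unset/get/set/get/unset/get cycle, and the only expected failures are the two `config get` calls against an unset key, which exit 14. A minimal sketch of the same cycle driven from Go, assuming the freshly built binary at out/minikube-linux-amd64 as in this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", args...) // path is an assumption
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode() // e.g. 14 when the key is not in the config
		}
		return -1 // binary could not be started at all
	}
	return 0
}

func main() {
	fmt.Println(run("-p", "minikube", "config", "unset", "cpus"))    // 0
	fmt.Println(run("-p", "minikube", "config", "get", "cpus"))      // 14: key not found
	fmt.Println(run("-p", "minikube", "config", "set", "cpus", "2")) // 0
	fmt.Println(run("-p", "minikube", "config", "get", "cpus"))      // 0: prints "2"
}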

TestFunctional/parallel/DashboardCmd (7.66s)
=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/23 23:57:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48998: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.66s)

TestFunctional/parallel/DryRun (0.16s)
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (76.742464ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 23:57:44.345703   49368 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:57:44.345839   49368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:57:44.345849   49368 out.go:358] Setting ErrFile to fd 2...
	I0923 23:57:44.345856   49368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:57:44.346053   49368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7453/.minikube/bin
	I0923 23:57:44.346590   49368 out.go:352] Setting JSON to false
	I0923 23:57:44.347503   49368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2413,"bootTime":1727133451,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:57:44.347677   49368 start.go:139] virtualization: kvm guest
	I0923 23:57:44.349831   49368 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 23:57:44.351225   49368 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7453/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 23:57:44.351239   49368 notify.go:220] Checking for updates...
	I0923 23:57:44.351272   49368 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:57:44.352596   49368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:57:44.353933   49368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	I0923 23:57:44.355207   49368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	I0923 23:57:44.356512   49368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:57:44.357727   49368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:57:44.359353   49368 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:57:44.359667   49368 exec_runner.go:51] Run: systemctl --version
	I0923 23:57:44.362296   49368 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:57:44.373851   49368 out.go:177] * Using the none driver based on existing profile
	I0923 23:57:44.375247   49368 start.go:297] selected driver: none
	I0923 23:57:44.375265   49368 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:57:44.375415   49368 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:57:44.375443   49368 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 23:57:44.376008   49368 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0923 23:57:44.378077   49368 out.go:201] 
	W0923 23:57:44.379246   49368 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 23:57:44.380431   49368 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.16s)

TestFunctional/parallel/InternationalLanguage (0.08s)
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (80.090822ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 23:57:44.503499   49398 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:57:44.503609   49398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:57:44.503617   49398 out.go:358] Setting ErrFile to fd 2...
	I0923 23:57:44.503621   49398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:57:44.503936   49398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7453/.minikube/bin
	I0923 23:57:44.504439   49398 out.go:352] Setting JSON to false
	I0923 23:57:44.505357   49398 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2413,"bootTime":1727133451,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:57:44.505453   49398 start.go:139] virtualization: kvm guest
	I0923 23:57:44.507448   49398 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0923 23:57:44.509233   49398 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7453/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 23:57:44.509284   49398 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:57:44.509317   49398 notify.go:220] Checking for updates...
	I0923 23:57:44.511891   49398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:57:44.513208   49398 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	I0923 23:57:44.514508   49398 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	I0923 23:57:44.515715   49398 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:57:44.517047   49398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:57:44.518715   49398 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:57:44.519022   49398 exec_runner.go:51] Run: systemctl --version
	I0923 23:57:44.521642   49398 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:57:44.533025   49398 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0923 23:57:44.534384   49398 start.go:297] selected driver: none
	I0923 23:57:44.534400   49398 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:57:44.534496   49398 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:57:44.534520   49398 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 23:57:44.534822   49398 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0923 23:57:44.537151   49398 out.go:201] 
	W0923 23:57:44.538422   49398 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 23:57:44.539857   49398 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.4s)
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.40s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.2s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "156.235605ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.565608ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.20s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "149.317614ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.476849ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-kzz6l" [c0ba6a5f-1093-4a58-8fdf-5f96d5602041] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-kzz6l" [c0ba6a5f-1093-4a58-8fdf-5f96d5602041] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003588754s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)
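Note: DeployApp polls for pods matching app=hello-node until one reaches Running (helpers_test.go:344 prints each observed state along the way). A rough client-go equivalent of that wait loop, as a sketch only; the default kubeconfig location and the 2-second poll interval are assumptions, not the suite's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(10 * time.Minute) // the test allows up to 10m0s
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=hello-node"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("healthy:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}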

TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "321.137627ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:31052
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:31052
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)

TestFunctional/parallel/ServiceCmdConnect (6.29s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-94d9v" [28f28234-2d19-4ab8-bd88-77d4a4dd8207] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-94d9v" [28f28234-2d19-4ab8-bd88-77d4a4dd8207] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004103621s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:32409
functional_test.go:1675: http://10.138.0.48:32409: success! body:

Hostname: hello-node-connect-67bdd5bbb4-94d9v

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:32409
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.29s)
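Note: ServiceCmdConnect resolves the NodePort URL with `minikube service ... --url` and counts the request as a success when the echoserver reflects it back, as in the body above. A small sketch of that probe; the URL is the one printed for this run and changes every time:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Endpoint printed by `minikube service hello-node-connect --url` above.
	resp, err := http.Get("http://10.138.0.48:32409/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// echoserver reflects the request; the serving pod appears on the Hostname line.
	if resp.StatusCode == http.StatusOK && strings.Contains(string(body), "Hostname:") {
		fmt.Printf("success! body:\n%s", body)
	} else {
		fmt.Println("unexpected response:", resp.Status)
	}
}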

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (23.84s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cba358db-b394-41c0-a443-ed36f89ded09] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003917134s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [98fc8c05-1e34-4604-bcdd-43c924b4b9e0] Pending
helpers_test.go:344: "sp-pod" [98fc8c05-1e34-4604-bcdd-43c924b4b9e0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [98fc8c05-1e34-4604-bcdd-43c924b4b9e0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003208559s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.168938612s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [442ba8ee-e5b9-4bba-bfb4-15e1deeafda0] Pending
helpers_test.go:344: "sp-pod" [442ba8ee-e5b9-4bba-bfb4-15e1deeafda0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [442ba8ee-e5b9-4bba-bfb4-15e1deeafda0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003549118s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.84s)
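Note: the interesting part of the PersistentVolumeClaim run is the persistence check: it writes /tmp/mount/foo through the claim, deletes the pod, recreates it from the same manifest, and verifies the file is still on the volume. The same flow shelled out to kubectl, as a sketch (kubectl on PATH and the suite's testdata manifests are assumed):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the minikube context and echoes the result.
func kubectl(args ...string) {
	full := append([]string{"--context", "minikube"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v: %s (err=%v)\n", args, out, err)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the PVC
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml") // remove the pod...
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // ...and bring it back
	// (in the real test there is a wait here for sp-pod to be Running again)
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")              // foo must still be listed
}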

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.25s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 51083: operation not permitted
helpers_test.go:508: unable to kill pid 51036: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.25s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [69a9fba9-00ac-4694-a231-3e568b1e93e3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [69a9fba9-00ac-4694-a231-3e568b1e93e3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003746589s
I0923 23:58:35.455110   14353 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.3.118 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MySQL (21.78s)
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vmtk9" [a4f0bb0f-fff3-4458-bc61-d699c700656a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vmtk9" [a4f0bb0f-fff3-4458-bc61-d699c700656a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003063238s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-vmtk9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-vmtk9 -- mysql -ppassword -e "show databases;": exit status 1 (106.50914ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 23:58:53.927363   14353 retry.go:31] will retry after 1.453720445s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-vmtk9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-vmtk9 -- mysql -ppassword -e "show databases;": exit status 1 (110.59604ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 23:58:55.492695   14353 retry.go:31] will retry after 1.848451072s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-vmtk9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.78s)
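Note: the two ERROR 2002 failures above are expected while mysqld inside the pod is still starting; retry.go simply re-runs the query with a growing delay until it succeeds. The pattern is roughly the following, as a generic sketch rather than minikube's own retry.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs f until it succeeds, roughly doubling the wait between
// attempts, and gives up after maxAttempts.
func retry(maxAttempts int, initial time.Duration, f func() error) error {
	wait := initial
	for i := 1; i <= maxAttempts; i++ {
		err := f()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", i, err, wait)
		time.Sleep(wait)
		wait *= 2
	}
	return errors.New("all attempts failed")
}

func main() {
	attempts := 0
	_ = retry(5, 1500*time.Millisecond, func() error {
		attempts++
		if attempts < 3 { // mysqld not accepting connections yet
			return errors.New("ERROR 2002 (HY000)")
		}
		return nil
	})
}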

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.304147777s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.68s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.680253719s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.68s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.38s)
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)

TestFunctional/parallel/License (0.26s)
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (14.31s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.309167053s)
--- PASS: TestImageBuild/serial/Setup (14.31s)

TestImageBuild/serial/NormalBuild (1.54s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.542108472s)
--- PASS: TestImageBuild/serial/NormalBuild (1.54s)

TestImageBuild/serial/BuildWithBuildArg (0.8s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.80s)

TestImageBuild/serial/BuildWithDockerIgnore (0.59s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.57s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.57s)

TestJSONOutput/start/Command (26.5s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (26.499657079s)
--- PASS: TestJSONOutput/start/Command (26.50s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.31s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.30894284s)
--- PASS: TestJSONOutput/stop/Command (5.31s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.766569ms)

-- stdout --
	{"specversion":"1.0","id":"c13e3d21-9ffb-4a70-b557-abe1ff86b194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"65403c8d-cb61-420a-9e71-eb15d256d23f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"ea4fa276-ed18-4395-85d8-4db37bceed46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d12826d8-421d-40dc-bfda-01f8b32712fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig"}}
	{"specversion":"1.0","id":"c1e4d7d2-1bf4-46f2-8f98-445a6cbe9ca1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube"}}
	{"specversion":"1.0","id":"06039158-71b6-4116-aad5-fffea930fc9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9995e0c5-43d6-4046-a64a-70cc55caa73a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9594080f-d71a-410a-9655-6c5a57fe7714","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
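
Aside: every line that start --output=json emits is a self-contained JSON object in the CloudEvents-style envelope visible above (specversion, id, source, type, datacontenttype, data), so a consumer can process the stream line by line. Below is a minimal Go sketch of such a consumer, written against only the fields shown in this output; it is illustrative and not part of the test suite.

	// decode_events.go: read minikube --output=json lines on stdin and
	// print each event's type alongside its data.message field.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors only the envelope fields visible in the log above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"` // io.k8s.sigs.minikube.{step,info,error}
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate interleaved non-JSON lines
			}
			fmt.Printf("%-35s %s\n", e.Type, e.Data["message"])
		}
	}

Fed the stdout above, it would print one type/message pair per event, ending with the io.k8s.sigs.minikube.error event "The driver 'fail' is not supported on linux/amd64".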

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (33.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.29811326s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.081012655s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.277783048s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.24s)

TestPause/serial/Start (24.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (24.015842325s)
--- PASS: TestPause/serial/Start (24.02s)

TestPause/serial/SecondStartNoReconfiguration (28.55s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (28.552210498s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.55s)

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (127.37237ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
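
Aside: with --layout=cluster the status payload uses HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), and the command deliberately exits non-zero here (exit status 2) because the cluster is paused, so a caller has to parse stdout even on a failing exit code. A minimal Go sketch of a reader for the fields visible above, illustrative rather than an official schema:

	// read_status.go: decode `minikube status --output=json --layout=cluster`
	// from stdin and report each node component's state.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"` // HTTP-like: 200 OK, 405 Stopped, 418 Paused
		StatusName string `json:"StatusName"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string               `json:"Name"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		var st clusterStatus
		if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Printf("%s: %s\n", st.Name, st.StatusName)
		for _, n := range st.Nodes {
			for name, c := range n.Components {
				fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
			}
		}
	}

Given the JSON above, it would report the apiserver as Paused (418) and the kubelet as Stopped (405).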

TestPause/serial/Unpause (0.39s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.39s)

TestPause/serial/PauseAgain (0.53s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

TestPause/serial/DeletePaused (1.76s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.755275922s)
--- PASS: TestPause/serial/DeletePaused (1.76s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (67.68s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2632831726 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2632831726 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (29.365609677s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.398716815s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.046842549s)
--- PASS: TestRunningBinaryUpgrade (67.68s)

TestStoppedBinaryUpgrade/Setup (0.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

TestStoppedBinaryUpgrade/Upgrade (49.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2833485393 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2833485393 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (13.557615487s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2833485393 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2833485393 -p minikube stop: (23.648266483s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.019893387s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.23s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestKubernetesUpgrade (307.25s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.942048482s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.278226546s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (71.146085ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m18.213410721s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (66.192102ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.399486206s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.220610052s)
--- PASS: TestKubernetesUpgrade (307.25s)
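
Aside: the downgrade attempt above is refused up front with a dedicated exit status (106, K8S_DOWNGRADE_UNSUPPORTED) rather than by mutating the existing v1.31.1 cluster. A wrapper can branch on that code; here is a minimal Go sketch, where the bare `minikube` binary name and flags are assumptions mirroring the logged command:

	// guard_downgrade.go: run a minikube start and detect a refused downgrade.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumed invocation, modeled on the command in the log above.
		cmd := exec.Command("minikube", "start", "--kubernetes-version=v1.20.0")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 106 {
			// Exit status 106 is the K8S_DOWNGRADE_UNSUPPORTED refusal seen above.
			fmt.Println("downgrade refused; `minikube delete` first or keep the current version")
			return
		}
		if err != nil {
			fmt.Println("start failed:", err)
		}
	}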

Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.12
189 TestStartStop/group/newest-cni 0.13
190 TestStartStop/group/default-k8s-diff-port 0.13
191 TestStartStop/group/no-preload 0.12
192 TestStartStop/group/disable-driver-mounts 0.12
193 TestStartStop/group/embed-certs 0.12
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.12s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.12s)

TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.12s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

TestStartStop/group/embed-certs (0.12s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.12s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
